@@ -28,7 +28,7 @@ This page shows you how to create and manage a Serverless Containers namespace.
3. Complete the following steps in the wizard:
- Enter a **name**, and optionally a **description** for your namespace. The name must only contain alphanumeric characters and dashes.
- Choose a **region**, which is the geographical location in which your namespace will be deployed.
- - Enter any **environment variables** required for your namespace. Environment variables configured in a namespace will be available in all containers/apps within the same namespace. For each environment variable, click **+Add new variable** and enter the key / value pair.
+ - Enter any **environment variables** required for your namespace. Environment variables configured in a namespace will be available in all containers/apps within the same namespace. For each environment variable, click **+Add new variable** and enter the key / value pair.
- Set secret environment variables (optional). **Secrets** are environment variables that are injected into your container and stored securely, but not displayed in the console after initial validation. Add a **key** and a **value**.
- Verify the **estimated cost**.
4. Click **Create namespace only** to finish, or click **Create namespace and add container** if you want to [deploy a container](/serverless-containers/how-to/deploy-container/) next.
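Environment variables defined at the namespace level surface inside every container or function of that namespace as regular process environment variables. The following minimal Python sketch illustrates how such a variable could be read at runtime; the `DB_HOST` and `DB_PORT` keys are hypothetical examples, not values required by the platform.

```python
import os

# Namespace-level environment variables are injected into the container's
# environment, so the standard library is enough to read them.
# "DB_HOST" and "DB_PORT" are illustrative keys only.
db_host = os.environ.get("DB_HOST", "localhost")
db_port = int(os.environ.get("DB_PORT", "5432"))

print(f"Connecting to {db_host}:{db_port}")
```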
@@ -33,19 +33,19 @@ Once the Function is built into an image, it will be pushed to [Container Regist
Cold Start is the time a function takes to handle a request when it is called for the first time.
The startup process steps are:
- * Downloading the container image (which contains the built Function) to our infrastructure
+ * Downloading the container image (which contains the built function) to our infrastructure
* Starting the container and the runtime
* Waiting for the container to be ready.
- [How to reduce cold starts](/serverless-functions/faq/#how-to-reduce-cold-start-of-serverless-functions)
+ Refer to the dedicated FAQ on [how to reduce cold starts](/serverless-functions/faq/#how-to-reduce-cold-start-of-serverless-functions) for more information.
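As an illustration of the impact of cold starts, the following sketch times two consecutive calls to a function endpoint from the client side; the URL is a hypothetical placeholder, and the exact timings depend on the runtime and the size of the image.

```python
import time
import urllib.request

# Hypothetical endpoint; replace it with your own function's URL.
FUNCTION_URL = "https://example-function.functions.fnc.fr-par.scw.cloud/"

def timed_call(url: str) -> float:
    """Return the wall-clock duration of a single HTTP GET, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

# The first call after a period of inactivity includes the cold start
# (image download, container start, runtime start); the second call hits
# an instance that is already warm and should be noticeably faster.
print(f"first call:  {timed_call(FUNCTION_URL):.2f}s")
print(f"second call: {timed_call(FUNCTION_URL):.2f}s")
```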
## Concurrency
Concurrency defines the capacity of a resource to process several requests at the same time. A single instance of a function has a concurrency of `1` as it handles requests sequentially, one by one, but a Serverless Function can have several instances running at the same time, depending on its [autoscaling](#autoscaling) configuration.
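As a back-of-the-envelope illustration of this relationship, the sketch below estimates capacity from the number of running instances and an assumed average request duration; both figures are hypothetical.

```python
# Each instance handles one request at a time (concurrency of 1), so total
# concurrency equals the number of instances, and throughput follows from
# the average request duration. Values below are assumptions for illustration.
avg_request_duration_s = 0.2   # assumed average request duration (200 ms)
instances = 5                  # assumed number of running instances

concurrent_requests = instances * 1                    # one request per instance
throughput_rps = instances / avg_request_duration_s    # requests per second

print(f"{concurrent_requests} requests in flight, ~{throughput_rps:.0f} requests/s")
```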
## Container Registry
- Container Registry is the place where the images of your Serverless Functions are stored before being deployed.
+ [Container Registry](/container-registry/) is the place where the images of your Serverless Functions are stored before being deployed.
## CRON trigger
@@ -85,15 +85,15 @@ The Serverless infrastructure manages incoming request traffic. In scenarios lik
## Logging
- Serverless offers a built-in logging system based on Scaleway Cockpit to track the activity of your resources: see [monitoring Serverless Functions](/serverless-functions/how-to/monitor-function/).
+ Serverless offers a built-in logging system based on Scaleway Cockpit to track the activity of your resources. Refer to [monitoring Serverless Functions](/serverless-functions/how-to/monitor-function/) for more information.
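For context, the sketch below shows how a Python function could emit logs; it assumes the usual event/context handler pattern and that anything written to standard output or via the `logging` module is collected by the built-in logging system. The handler name and the returned payload shape are illustrative.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def handle(event, context):
    # Both logging calls and plain print statements end up in the
    # function's log stream (assumption based on stdout/stderr capture).
    logger.info("Function invoked")
    print("Plain print output is captured as well")
    return {"statusCode": 200, "body": "ok"}
```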
## Max scale
- This parameter sets the maximum number of function instances. You should adjust it based on your function's traffic spikes, keeping in mind that you may wish to limit the max scale to manage costs effectively.
+ This parameter sets the maximum number of function instances. You should adjust it based on your function's traffic spikes, keeping in mind that you may wish to limit the maximum scale to manage costs effectively.
## Metrics
- Performance metrics for your Serverless resources are natively available: see [monitoring Serverless Functions](/serverless-functions/how-to/monitor-function/)).
+ Performance metrics for your Serverless resources are natively available. Refer to [monitoring Serverless Functions](/serverless-functions/how-to/monitor-function/) for more information.
## Min scale
@@ -124,7 +124,7 @@ Refer to the [dedicated FAQ](/serverless-functions/faq/#how-can-i-configure-acce
A queue trigger is a mechanism that connects a function to a queue created with [Scaleway Queues](/queues/concepts/#scaleway-queues), and invokes the function automatically whenever a message is added to the queue.
- For each message that is sent to a queue, the trigger reads the message and invokes the associated function with the message as the input parameter.
+ For each message that is sent to a queue, the trigger reads the message, and invokes the associated function with the message as the input parameter.
The function can then process the message and perform any required actions, such as updating a database or sending a notification.
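As a hypothetical sketch of that flow, the Python handler below treats the incoming queue message as a JSON payload; the exact shape of the `event` received from the trigger, the `order_id` field, and the handler signature are assumptions made for illustration only.

```python
import json

def handle(event, context):
    # Assumption: the queue message is delivered as a JSON string in the
    # request body exposed through event["body"].
    message = json.loads(event.get("body", "{}"))

    # Placeholder processing step, e.g. updating a database or sending a
    # notification. "order_id" is an illustrative field name.
    order_id = message.get("order_id")
    print(f"Processing message for order {order_id}")

    return {"statusCode": 200, "body": "message processed"}
```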
## Request timeout
@@ -141,7 +141,7 @@ When deploying a new version of a Serverless Function, a rolling update is appli
Here is how it works:
* When a new version of your function is deployed, the platform automatically starts routing traffic to the new version incrementally, while still serving requests from the old version until the new one is fully deployed.
- * Once the new version is successfully running, we gradually shift all traffic to it, ensuring zero downtime.
+ * Once the new version is successfully running, the platform gradually shifts all traffic to it, ensuring zero downtime.
* The old version is decommissioned once the new version is fully serving traffic.
This process ensures a seamless update experience, minimizing user disruption during deployments. If needed, you can also manage traffic splitting between versions during the update process, allowing you to test new versions with a subset of traffic before fully migrating to them.
@@ -160,7 +160,7 @@ Refer to the [dedicated documentation](/serverless-functions/reference-content/f
## Scale to zero
- One of the advantages of Serverless Functions is that when your function is not triggered, it does not consume any resources, which allows for significant savings.
+ When provisioned with a [minimum scale](#min-scale) of `0`, Serverless Functions scale down to zero active instances as long as they are not triggered. While idling, they do not consume any resources, which helps reduce the cost of your infrastructure.