diff --git a/faq/serverless-containers.mdx b/faq/serverless-containers.mdx
index 7aaff03bb7..d16078b58d 100644
--- a/faq/serverless-containers.mdx
+++ b/faq/serverless-containers.mdx
@@ -10,6 +10,49 @@ category: serverless
productIcon: ContainersProductIcon
---
+## What is serverless computing, and how does it differ from traditional cloud hosting?
+
+Serverless computing is a cloud execution model where the cloud provider dynamically manages the allocation of compute resources. Unlike traditional hosting models, you do not need to provision, scale, or maintain servers. Instead, you focus solely on writing and deploying your code, and the infrastructure scales automatically to meet demand.
+
+## Why consider using Serverless Containers, Functions, or Jobs for my projects?
+
+These services allow you to build highly scalable, event-driven, and pay-as-you-go solutions. Serverless Containers and Functions help you create applications and microservices without worrying about server management, while Serverless Jobs lets you run large-scale, parallel batch-processing tasks efficiently. This can lead to faster development cycles, reduced operational overhead, and cost savings.
+
+## Can I run any application on Serverless Containers?
+
+Yes. Because Serverless Containers supports any containerized application, you can choose the language, runtime, and framework that best suits your needs. As long as it can run in a container and respond to HTTP requests, Serverless Containers can host it.
+
+## What are the cost benefits of using serverless services like Serverless Containers?
+
+With serverless, you only pay for the computing resources you use. There are no upfront provisioning costs, and you do not pay for idle capacity. When your application traffic is low, the cost scales down, and when traffic spikes, the platform automatically scales up, ensuring you never overpay for unused resources.
+
+## Can updates of Serverless Containers cause downtime?
+
+No, deploying a new version of your Serverless Container triggers a **rolling update**. This means that a new version of the service is gradually
+rolled out to your users without downtime. Here is how it works:
+
+* When a new version of your container is deployed, the platform automatically starts routing traffic to the new version incrementally, while still serving requests from the old version until the new one is fully deployed.
+* Once the new version is successfully running, we gradually shift all traffic to it, ensuring zero downtime.
+* The old version is decommissioned once the new version is fully serving traffic.
+
+This process ensures a seamless update experience, minimizing user disruption during deployments. If needed, you can also manage traffic splitting between versions during the update process, allowing you to test new versions with a subset of traffic before fully migrating to them.
+
+## Can I upgrade Serverless Container resources (vCPU and RAM) at any time?
+
+Yes, Serverless Container resources can be changed at any time without causing downtime. See the previous question for full details.
+
+## How does scaling work in these serverless services?
+
+Scaling in Serverless Containers and Serverless Functions is handled automatically by the platform. When demand increases - more requests or events - the platform spins up additional instances to handle the load. When demand decreases, instances spin down. This ensures optimal performance without manual intervention.
+
+## How do I integrate my serverless solutions with other Scaleway services?
+
+Integration is straightforward. Serverless Functions and Containers can be triggered by events from [Queues](/serverless/messaging/concepts/#queues) and [Topics and Events](/serverless/messaging/concepts/#topics-and-events), and can easily communicate with services like [Managed Databases](/managed-databases/) or [Serverless databases](/serverless/sql-databases/). [Serverless Jobs](/serverless/jobs/) can pull data from [Object Storage](/storage/object), or output processed results into a database. With managed connectors, APIs, and built-in integrations, linking to the broader Scaleway ecosystem is seamless.
+
+## Can I migrate existing applications to Serverless Containers?
+
+Yes. Many traditional applications can be containerized and deployed to Serverless Containers. This makes it easier to modernize legacy systems without a complete rewrite. By moving to a serverless platform, you gain automatic scaling, reduced operational overhead, and a simpler infrastructure management experience.
+
## Are applications deployed on Serverless Containers stateless?
Yes, all applications deployed on Serverless Containers are stateless. This means the server does not store any state about the client session. Instead, the session data is stored on the client and passed to the server as needed.
@@ -143,4 +186,28 @@ solutions like Scaleway Object Storage.
## Why does my container have an instance running after deployment, even with min-scale 0?
Currently, a new container instance will always start after each deployment, even if there is no traffic and the minimum
-scale is set to 0. This behavior is not configurable at this time.
\ No newline at end of file
+scale is set to 0. This behavior is not configurable at this time.
+
+## How can I store data in my Serverless resource?
+
+Serverless resources are [stateless](/serverless/containers/concepts/#stateless) by default, and local storage is ephemeral.
+
+For some use cases, such as saving analysis results or exporting data, it can be important to persist data. Serverless resources can be connected to other resources from the Scaleway ecosystem for this purpose:
+
+### Databases
+
+* [Serverless Databases](/serverless/sql-databases/): Go full serverless and take the complexity out of PostgreSQL database operations.
+* [Managed MySQL / PostgreSQL](/managed-databases/postgresql-and-mysql/): Ensure scalability of your infrastructure and storage with our new generation of Managed Databases designed to scale on-demand according to your needs.
+* [Managed Database for Redis®](/managed-databases/redis/): Fully managed Redis® in seconds.
+* [Managed MongoDB®](/managed-databases/mongodb/): Get the best of MongoDB® and Scaleway in one database.
+
+### Storage
+
+* [Object Storage](/storage/object/): Multi-AZ resilient object storage service ensuring high availability for your data.
+* [Scaleway Glacier](/storage/object/): Our outstanding Cold Storage class to secure long-term object storage. Ideal for deep archived data.
+
+
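+For example, a container or job can persist its results to Object Storage through its S3-compatible API. The sketch below is for illustration only: the bucket name, regional endpoint, and credential variable names are placeholders to replace with your own values.
+
+```python
+import os
+
+import boto3
+
+# Placeholder values: adapt the endpoint, bucket, and credential variables to your setup
+s3 = boto3.client(
+    "s3",
+    endpoint_url="https://s3.fr-par.scw.cloud",  # example regional endpoint
+    aws_access_key_id=os.environ["ACCESS_KEY_ID"],
+    aws_secret_access_key=os.environ["SECRET_ACCESS_KEY"],
+)
+
+# Persist a result produced by a stateless container
+s3.put_object(
+    Bucket="my-results-bucket",
+    Key="reports/analysis.json",
+    Body=b'{"status": "ok"}',
+)
+```
+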
+Explore all Scaleway products in the console and select the right product for your use case.
+
+Further integrations are also possible, even if they are not listed above. For example, [Secret Manager](/identity-and-access-management/secret-manager/) can help you store information that requires versioning.
+
diff --git a/faq/serverless-functions.mdx b/faq/serverless-functions.mdx
index df659c91aa..4136970e0e 100644
--- a/faq/serverless-functions.mdx
+++ b/faq/serverless-functions.mdx
@@ -10,6 +10,41 @@ category: serverless
productIcon: FunctionsProductIcon
---
+## What is serverless computing, and how does it differ from traditional cloud hosting?
+
+Serverless computing is a cloud execution model where the cloud provider dynamically manages the allocation of compute resources. Unlike traditional hosting models, you do not need to provision, scale, or maintain servers. Instead, you focus solely on writing and deploying your code, and the infrastructure scales automatically to meet demand.
+
+## Why consider using Serverless Containers, Functions, or Jobs for my projects?
+
+These services allow you to build highly scalable, event-driven, and pay-as-you-go solutions. Serverless Containers and Functions help you create applications and microservices without worrying about server management, while Serverless Jobs lets you run large-scale, parallel batch-processing tasks efficiently. This can lead to faster development cycles, reduced operational overhead, and cost savings.
+
+## What are the cost benefits of using serverless services like Serverless Containers and Serverless Functions?
+
+With serverless, you only pay for the computing resources you use. There are no upfront provisioning costs, and you do not pay for idle capacity. When your application traffic is low, the cost scales down, and when traffic spikes, the platform automatically scales up, ensuring you never overpay for unused resources.
+
+## How does scaling work in these serverless services?
+
+Scaling in Serverless Containers and Serverless Functions is handled automatically by the platform. When demand increases - more requests or events - the platform spins up additional instances to handle the load. When demand decreases, instances spin down. This ensures optimal performance without manual intervention.
+
+## Can updates of Serverless Functions cause downtime?
+
+No, deploying a new version of your Serverless Function triggers a **rolling update**. This means that a new version of the service is gradually
+rolled out to your users without downtime. Here is how it works:
+
+* When a new version of your function is deployed, the platform automatically starts routing traffic to the new version incrementally, while still serving requests from the old version until the new one is fully deployed.
+* Once the new version is successfully running, we gradually shift all traffic to it, ensuring zero downtime.
+* The old version is decommissioned once the new version is fully serving traffic.
+
+This process ensures a seamless update experience, minimizing user disruption during deployments. If needed, you can also manage traffic splitting between versions during the update process, allowing you to test new versions with a subset of traffic before fully migrating to them.
+
+## Can I upgrade Serverless Function resources (vCPU and RAM) at any time?
+
+Yes, Serverless Function resources can be changed at any time without causing downtime. See the previous question for full details.
+
+## How do I integrate my serverless solutions with other Scaleway services?
+
+Integration is straightforward. Serverless Functions and Containers can be triggered by events from [Queues](/serverless/messaging/concepts/#queues) and [Topics and Events](/serverless/messaging/concepts/#topics-and-events), and can easily communicate with services like [Managed Databases](/managed-databases/) or [Serverless databases](/serverless/sql-databases/). [Serverless Jobs](/serverless/jobs/) can pull data from [Object Storage](/storage/object), or output processed results into a database. With managed connectors, APIs, and built-in integrations, linking to the broader Scaleway ecosystem is seamless.
+
## How am I billed for Serverless Functions?
### Principle
@@ -184,3 +219,27 @@ solutions like Scaleway Object Storage.
Currently, a new function instance will always start after each deployment, even if there is no traffic and the minimum
scale is set to 0. This behavior is not configurable at this time.
+
+## How can I store data in my Serverless resource?
+
+Serverless resources are [stateless](/serverless/functions/concepts/#stateless) by default, and local storage is ephemeral.
+
+For some use cases, such as saving analysis results or exporting data, it can be important to persist data. Serverless resources can be connected to other resources from the Scaleway ecosystem for this purpose:
+
+### Databases
+
+* [Serverless Databases](/serverless/sql-databases/): Go full serverless and take the complexity out of PostgreSQL database operations.
+* [Managed MySQL / PostgreSQL](/managed-databases/postgresql-and-mysql/): Ensure scalability of your infrastructure and storage with our new generation of Managed Databases designed to scale on-demand according to your needs.
+* [Managed Database for Redis®](/managed-databases/redis/): Fully managed Redis® in seconds.
+* [Managed MongoDB®](/managed-databases/mongodb/): Get the best of MongoDB® and Scaleway in one database.
+
+### Storage
+
+* [Object Storage](/storage/object/): Multi-AZ resilient object storage service ensuring high availability for your data.
+* [Scaleway Glacier](/storage/object/): Our outstanding Cold Storage class to secure long-term object storage. Ideal for deep archived data.
+
+
+Explore all Scaleway products in the console and select the right product for your use case.
+
+Further integrations are also possible, even if they are not listed above. For example, [Secret Manager](/identity-and-access-management/secret-manager/) can help you store information that requires versioning.
+
\ No newline at end of file
diff --git a/faq/serverless-jobs.mdx b/faq/serverless-jobs.mdx
index 93d628c219..c902940bee 100644
--- a/faq/serverless-jobs.mdx
+++ b/faq/serverless-jobs.mdx
@@ -10,9 +10,27 @@ category: serverless
productIcon: ServerlessJobsProductIcon
---
-## What are Serverless Jobs?
+## What is serverless computing, and how does it differ from traditional cloud hosting?
-Scaleway Serverless Jobs is a fully managed service that enables efficient execution of batch computing workloads. It automates management tasks, allowing users to run large-scale batch jobs with ease.
+Serverless computing is a cloud execution model where the cloud provider dynamically manages the allocation of compute resources. Unlike traditional hosting models, you do not need to provision, scale, or maintain servers. Instead, you focus solely on writing and deploying your code, and the infrastructure scales automatically to meet demand.
+
+## Why consider using Serverless Containers, Functions, or Jobs for my projects?
+
+These services allow you to build highly scalable, event-driven, and pay-as-you-go solutions. Serverless Containers and Functions help you create applications and microservices without worrying about server management, while Serverless Jobs lets you run large-scale, parallel batch-processing tasks efficiently. This can lead to faster development cycles, reduced operational overhead, and cost savings.
+
+## What is Serverless Jobs, and when should I use it?
+
+Serverless Jobs allows you to run large-scale batch processing and computational workloads in a fully managed environment. If you have tasks like data processing, machine learning training jobs, simulations, or large-scale analytics that can be parallelized, Serverless Jobs helps you orchestrate and manage those workloads seamlessly.
+
+## How do I integrate my serverless solutions with other Scaleway services?
+
+Integration is straightforward. Serverless Functions and Containers can be triggered by events from [Queues](/serverless/messaging/concepts/#queues) and [Topics and Events](/serverless/messaging/concepts/#topics-and-events), and can easily communicate with services like [Managed Databases](/managed-databases/) or [Serverless databases](/serverless/sql-databases/). [Serverless Jobs](/serverless/jobs/) can pull data from [Object Storage](/storage/object), or output processed results into a database. With managed connectors, APIs, and built-in integrations, linking to the broader Scaleway ecosystem is seamless.
+
+## Can I update Serverless Jobs resources (vCPU and RAM) at any time?
+
+Yes, resources of your Job Definition can be updated at any time.
+
+Ongoing job runs continue to use the resources defined at the time they started.
## How am I billed for Serverless Jobs?
@@ -124,3 +142,26 @@ To add network restrictions on your resource, consult the [list of prefixes used
## Can I securely use sensitive information with Serverless Jobs?
Yes, you can use sensitive data such as API secret keys, passwords, TLS/SSL certificates, or tokens. Serverless Jobs seamlessly integrates with [Secret Manager](/identity-and-access-management/secret-manager/), which allows you to securely reference sensitive information within your jobs. Refer to the [dedicated documentation](/serverless/jobs/how-to/reference-secret-in-job/) for more information.
+
+## How can I store data in my Serverless resource?
+
+Serverless resources are [stateless](/serverless/jobs/concepts/#stateless) by default, and local storage is ephemeral.
+
+For some use cases, such as saving analysis results or exporting data, it can be important to persist data. Serverless resources can be connected to other resources from the Scaleway ecosystem for this purpose:
+
+### Databases
+
+* [Serverless Databases](/serverless/sql-databases/): Go full serverless and take the complexity out of PostgreSQL database operations.
+* [Managed MySQL / PostgreSQL](/managed-databases/postgresql-and-mysql/): Ensure scalability of your infrastructure and storage with our new generation of Managed Databases designed to scale on-demand according to your needs.
+* [Managed MongoDB®](/managed-databases/mongodb/): Get the best of MongoDB® and Scaleway in one database.
+
+### Storage
+
+* [Object Storage](/storage/object/): Multi-AZ resilient object storage service ensuring high availability for your data.
+* [Scaleway Glacier](/storage/object/): Our outstanding Cold Storage class to secure long-term object storage. Ideal for deep archived data.
+
+
+Explore all Scaleway products in the console and select the right product for your use case.
+
+Further integrations are also possible, even if they are not listed above. For example, [Secret Manager](/identity-and-access-management/secret-manager/) can help you store information that requires versioning.
+
\ No newline at end of file
diff --git a/serverless/containers/concepts.mdx b/serverless/containers/concepts.mdx
index 6da5cd9021..8f836aa527 100644
--- a/serverless/containers/concepts.mdx
+++ b/serverless/containers/concepts.mdx
@@ -39,11 +39,11 @@ A container image is a file that includes all the requirements and instructions
## Container Registry
-Container Registry is the place where your images are stored before being deployed, we recommend using Scaleway Container Registry for optimal integration. [Migration guide](/serverless/containers/api-cli/migrate-external-image-to-scaleway-registry/).
+Container Registry is the place where your images are stored before being deployed. We recommend using Scaleway Container Registry for optimal integration. See the [migration guide](/serverless/containers/api-cli/migrate-external-image-to-scaleway-registry/) for full details.
## CRON trigger
-A CRON trigger is a mechanism used to automatically invoke a Serverless Function at a specific time on a recurring schedule.
+A CRON trigger is a mechanism used to automatically invoke a Serverless Container at a specific time on a recurring schedule.
It works similarly to a traditional Linux [cron job](https://en.wikipedia.org/wiki/Cron), using the `* * * * *` format, and uses the **UTC** time zone. Refer to our [cron schedules reference](/serverless/containers/reference-content/cron-schedules/) for more information.
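+
+As an illustration (standard cron semantics; the text after `#` is only an annotation, not part of the expression):
+
+```
+0 8 * * 1     # every Monday at 08:00 UTC
+*/15 * * * *  # every 15 minutes
+```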
@@ -95,10 +95,18 @@ JWT (JSON Web Token) is an access token you can create from the console or API t
The Serverless infrastructure manages incoming request traffic. In scenarios like sudden traffic spikes or load testing, resources are automatically scaled based on the max scale parameter to handle the load.
+## Logging
+
+Serverless Containers offers a built-in logging system based on Cockpit to track the activity of your resources: see [monitoring Serverless Containers](/serverless/containers/how-to/monitor-container/).
+
## Max scale
This parameter sets the maximum number of container instances. You should adjust it based on your container's traffic spikes, keeping in mind that you may wish to limit the max scale to manage costs effectively.
+## Metrics
+
+Performance metrics for your Serverless resources are natively available: see [monitoring Serverless Containers](/serverless/containers/how-to/monitor-container/).
+
## Min scale
Customizing the minimum scale for Serverless can help ensure that an instance remains pre-allocated and ready to handle requests, reducing delays associated with cold starts. However, this setting also impacts the costs of your Serverless Container.
@@ -126,6 +134,10 @@ The port parameter specifies the network port that your container listens on for
The value defined in the port parameter will then be passed to your container during the deployment inside the `PORT` environment variable.
+
+Only one HTTP port can be exposed per Serverless Container.
+
+
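+As a minimal sketch (not a production-ready server), a containerized application can read the `PORT` environment variable and bind to it. The example below uses Python's standard library; any language or framework works the same way:
+
+```python
+import os
+from http.server import BaseHTTPRequestHandler, HTTPServer
+
+class Handler(BaseHTTPRequestHandler):
+    def do_GET(self):
+        # Answer every GET request with a plain-text message
+        self.send_response(200)
+        self.send_header("Content-Type", "text/plain")
+        self.end_headers()
+        self.wfile.write(b"Hello from a Serverless Container\n")
+
+if __name__ == "__main__":
+    # The platform passes the configured port through the PORT environment variable
+    port = int(os.environ.get("PORT", "8080"))
+    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
+```
+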
## Privacy policy
A container's privacy policy defines whether a container may be invoked anonymously (**public**) or only via an authentication mechanism provided by the [Scaleway API](https://www.scaleway.com/en/developers/api/serverless-containers/#authentication) (**private**).
@@ -160,6 +172,12 @@ Serverless allows you to deploy your Functions (FaaS) and Containerized Applicat
Serverless.com (Serverless Framework) is a tool that allows you to deploy serverless applications without having to manage Serverless Container's API call. Write and deploy a YAML configuration file, everything else is handled automatically, even the image building.
+## Serverless Function
+
+Serverless Functions are serverless, fully managed compute services that allow you to run small, stateless code snippets or functions in response to HTTP requests or events.
+
+These functions automatically scale based on demand and are designed to be lightweight, event-driven, and easily deployable, eliminating the need to worry about infrastructure management. Serverless Functions is built on top of Serverless Containers, meaning your functions run packaged in containers and scale efficiently.
+
## Serverless Job
Serverless Jobs are similar to Serverless Containers but are better suited for running longer workloads. See [the comparison between Serverless products](/serverless/containers/reference-content/difference-jobs-functions-containers) for more information.
@@ -171,6 +189,17 @@ A queue trigger is a mechanism that connects a container to a queue created with
For each message that is sent to a queue, the trigger reads the message and invokes the associated container with the message as the input parameter.
The container can then process the message and perform any required actions, such as updating a database or sending a notification.
+## Rolling update
+
+When deploying a new version of a Serverless Container, a rolling update is applied by default. This means that the new version of the service is gradually rolled out to your users without downtime.
+Here is how it works:
+
+* When a new version of your container is deployed, the platform automatically starts routing traffic to the new version incrementally, while still serving requests from the old version until the new one is fully deployed.
+* Once the new version is successfully running, we gradually shift all traffic to it, ensuring zero downtime.
+* The old version is decommissioned once the new version is fully serving traffic.
+
+This process ensures a seamless update experience, minimizing user disruption during deployments. If needed, you can also manage traffic splitting between versions during the update process, allowing you to test new versions with a subset of traffic before fully migrating to them.
+
## Stateless application
A stateless application is a computer program that does not save client data between sessions. Data generated in one session is not saved for use in the next session with that client.
@@ -184,6 +213,14 @@ A Serverless Container can have the following statuses:
* **Pending**: your resource is under deployment.
* **Error**: something went wrong during the deployment process. [Check our troubleshooting documentation](/serverless/containers/troubleshooting/cannot-deploy-image) to solve the issue.
+## Stateless
+
+Refers to a system or application that does not maintain any persistent state between executions. In a stateless environment, each request or operation is independent, and no information is retained from previous interactions.
+
+This means that each request is treated as a new and isolated event, and there is no need for the system to remember previous states or data once a task is completed. Statelessness is commonly used in serverless architectures where each function execution is independent of others.
+
+To store data you can use [Scaleway Object Storage](/storage/object/), [Scaleway Managed Databases](/managed-databases/postgresql-and-mysql/), and [Scaleway Serverless Databases](/serverless/sql-databases/).
+
## Terraform
Terraform is a tool for managing infrastructure using code. [Read the Terraform documentation for Serverless Containers](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs/resources/container).
diff --git a/serverless/containers/reference-content/containers-limitations.mdx b/serverless/containers/reference-content/containers-limitations.mdx
index 343fb7100d..f1cececfa2 100644
--- a/serverless/containers/reference-content/containers-limitations.mdx
+++ b/serverless/containers/reference-content/containers-limitations.mdx
@@ -52,20 +52,25 @@ During the execution of the container, if the limits are exceeded, a restart occ
In order to ensure the proper functioning of the product, we restrict the use of certain ports and environment variables
-* Blocked ports:
- * **25**: Due to potential abuse (spam), no outbound traffic is allowed through this port, except from Scaleway Transactional Email SMTP servers.
- * **465**: Due to potential abuse (spam), no outbound traffic is allowed through this port, except from Scaleway Transactional Email SMTP servers.
-* Unavailable custom ports
- Do not make your containers listen on these ports which are used by our service.
- * 8008
- * 8012
- * 8013
- * 8022
- * 9090
- * 9091
-* Reserved environment variables:
- * `PORT`: Value of the port defined in the Container settings, which the container has to listen on. You can use this environment variable inside your Container for easier deployments.
- * `SCW_*`: Reserved for product configuration (for example: token validation)
+### Blocked ports
+
+Due to potential abuse (spam), no outbound traffic is allowed through the following ports, except from Scaleway Transactional Email SMTP servers.
+
+* **25**
+* **465**
+
+### Unavailable custom ports
+Do not have your containers listen on these ports, as they are used by our service.
+* 8008
+* 8012
+* 8013
+* 8022
+* 9090
+* 9091
+
+### Reserved environment variables
+* `PORT`: Value of the port defined in the Container settings, which the container has to listen on. You can use this environment variable inside your Container for easier deployments.
+* `SCW_*`: Reserved for product configuration (for example: token validation)
## Default values for CPU and memory limits
diff --git a/serverless/functions/concepts.mdx b/serverless/functions/concepts.mdx
index 134ddc89ac..bf77ce4cdc 100644
--- a/serverless/functions/concepts.mdx
+++ b/serverless/functions/concepts.mdx
@@ -12,6 +12,12 @@ categories:
- serverless
---
+## Build step
+
+Before they can be deployed, Serverless Functions have to be built into a container image. This build step happens automatically during deployment.
+
+Once the function is built into an image, it is pushed to the [Container Registry](#container-registry).
+
## Cold Start
Cold Start is the time a function takes to handle a request when it is called for the first time.
@@ -23,33 +29,67 @@ Startup process steps are:
[How to reduce cold starts](/faq/serverless-functions/#how-to-reduce-cold-start-of-serverless-functions)
+## Container Registry
+
+Container Registry is the place where the images of your Serverless Functions are stored before being deployed.
+
## CRON trigger
A CRON trigger is a mechanism used to automatically invoke a Serverless Function at a specific time on a recurring schedule. It works similarly to a traditional Linux [cron job](https://en.wikipedia.org/wiki/Cron), using the `* * * * *` format, and uses the **UTC** time zone. Refer to our [cron schedules reference](/serverless/functions/reference-content/cron-schedules/) for more information.
-## Environment variables
+## Custom domain
-An environment variable is a variable whose value is set outside the program, typically through functionality built into the operating system or microservice. An environment variable is made up of a name/value pair, and any number may be created and available for reference at a point in time.
+By default, a generated endpoint is assigned to your Serverless resource. Custom domains allow you to use your own domain - see our [custom domain documentation](/serverless/functions/how-to/add-a-custom-domain-to-a-function) for full details.
-## Function
+## Endpoint
-A function defines a procedure on how to change one element into another. The function remains static, while the variables that pass through it can vary.
+An endpoint is the URL generated to access your resource. It can be customized with [custom domains](#custom-domain).
+
+## Environment variables
+
+Environment variables are key/value pairs injected into your function. They are useful for sharing information such as configurations with your function. Some names are reserved. [See details on reserved names](/serverless/functions/reference-content/functions-limitations/#configuration-restrictions).
## GB-s
-Unit used to measure the resource consumption of a function. It reflects the amount of memory consumed over time.
+Unit used to measure the resource consumption of a Serverless Function. It reflects the amount of memory consumed over time.
## JWT Token
-A JWT (JSON Web Token) is an access token you can create from the console or the API to enable an application to access your Private function.
+JWT (JSON Web Token) is an access token you can create from the console or API to enable an application to access your private function. [Find out how to secure a Function](/serverless/functions/how-to/secure-a-function/#restrict-access-to-your-functions).
## Handler
A handler is a routine/function/method that processes specific events. Upon invoking your function, the handler is executed and returns an output. Refer to our [dedicated documentation](/serverless/functions/reference-content/functions-handlers/) for more information on the structure of a handler.
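+
+As a minimal sketch (the exact handler signature and event shape depend on the runtime you choose, so check the runtime documentation), a Python handler could look like this:
+
+```python
+def handle(event, context):
+    # "event" carries the incoming request data; "context" carries runtime metadata
+    return {
+        "statusCode": 200,
+        "body": "Hello from a Serverless Function",
+    }
+```
+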
+## Instance
+
+A Serverless Function instance handles incoming requests based on factors like the request volume, min scale, and max scale parameters.
+
+## Load balancing
+
+The Serverless infrastructure manages incoming request traffic. In scenarios like sudden traffic spikes or load testing, resources are automatically scaled based on the max scale parameter to handle the load.
+
+## Logging
+
+Serverless offers a built-in logging system based on Scaleway Cockpit to track the activity of your resources: see [monitoring Serverless Functions](/serverless/functions/how-to/monitor-function/).
+
+## Max scale
+
+This parameter sets the maximum number of function instances. You should adjust it based on your function's traffic spikes, keeping in mind that you may wish to limit the max scale to manage costs effectively.
+
+## Metrics
+
+Performance metrics for your Serverless resources are natively available: see [monitoring Serverless Functions](/serverless/functions/how-to/monitor-function/).
+
+## Min scale
+
+Customizing the minimum scale for Serverless can help ensure that an instance remains pre-allocated and ready to handle requests, reducing delays associated with cold starts. However, this setting also impacts the costs of your Serverless Function.
+
## Namespace
-A namespace is a project that allows you to [group your functions](/serverless/functions/how-to/create-manage-delete-functions-namespace/). Functions in the same namespace can share environment variables and access tokens, defined at the namespace level.
+A namespace is a project that allows you to [group your functions](/serverless/functions/how-to/create-manage-delete-functions-namespace/).
+
+Functions in the same namespace can share environment variables and access tokens, defined at the namespace level.
## NATS trigger
@@ -62,6 +102,24 @@ The function can then process the message and perform any required actions, such
A function's privacy policy defines whether a function may be executed anonymously (**public**) or only via an authentication mechanism provided by the [Scaleway API](https://www.scaleway.com/en/developers/api/serverless-functions/#authentication) (**private**).
+## Queue trigger
+
+A queue trigger is a mechanism that connects a function to a queue created with [Scaleway Queues](/serverless/messaging/concepts/#queues), and invokes the function automatically whenever a message is added to the queue.
+
+For each message that is sent to a queue, the trigger reads the message and invokes the associated function with the message as the input parameter.
+The function can then process the message and perform any required actions, such as updating a database or sending a notification.
+
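+As a hedged sketch (the exact event payload depends on the runtime and trigger configuration, so verify it against the triggers documentation), a Python function consuming queue messages could look like this:
+
+```python
+import json
+
+def handle(event, context):
+    # The queue message is passed to the function as the request body
+    message = json.loads(event["body"])
+    # "order_id" is an arbitrary example field; process the message as needed,
+    # for example by updating a database or sending a notification
+    print("Processing order", message.get("order_id"))
+    return {"statusCode": 200}
+```
+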
+## Rolling update
+
+When deploying a new version of a Serverless Function, a rolling update is applied by default. This means that the new version of the service is gradually rolled out to your users without downtime.
+Here is how it works:
+
+* When a new version of your function is deployed, the platform automatically starts routing traffic to the new version incrementally, while still serving requests from the old version until the new one is fully deployed.
+* Once the new version is successfully running, we gradually shift all traffic to it, ensuring zero downtime.
+* The old version is decommissioned once the new version is fully serving traffic.
+
+This process ensures a seamless update experience, minimizing user disruption during deployments. If needed, you can also manage traffic splitting between versions during the update process, allowing you to test new versions with a subset of traffic before fully migrating to them.
+
## Runtime
The runtime is the execution environment of your function. Regarding Serverless Function, it consists of the languages in which your code is written.
@@ -92,20 +150,31 @@ Serverless allows you to deploy your Functions (FaaS) and Containerized Applicat
Serverless.com (Serverless Framework) is a tool that enables the deployment of serverless applications without having to manage Serverless Function's API call. Just write your configuration in a YAML and deploy, it handles everything.
-## Serverless Functions
+## Serverless Function
-Serverless Functions simplify deploying applications to the Cloud. They only require you to install a piece of business logic, a “function”, on any cloud platform, which executes it on demand. This allows you to focus on backend code without provisioning or maintaining servers.
+Serverless Functions are serverless, fully managed compute services that allow you to run small, stateless code snippets or functions in response to HTTP requests or events.
-The platform also handles function availability and manages resource allocation for you. For instance, if the system needs to accommodate 100 simultaneous requests, it allocates 100 (or more) copies of your service. If demand drops to two concurrent requests, it destroys the unneeded ones.
+These functions automatically scale based on demand and are designed to be lightweight, event-driven, and easily deployable, eliminating the need to worry about infrastructure management. Serverless Functions is built on top of Serverless Containers, meaning your functions run packaged in containers and scale efficiently.
-You pay for the resources your functions use, and only when your functions need them.
+## Serverless Job
-## Queue trigger
+Serverless Jobs are similar to Serverless Functions but are better suited for running longer workloads. See [the comparison between Serverless products](/serverless/functions/reference-content/difference-jobs-functions-containers) for more information.
-A queue trigger is a mechanism that connects a function to a queue created with [Scaleway Queues](/serverless/messaging/concepts/#queues), and invokes the function automatically whenever a message is added to the queue.
+## Stateless
+
+Refers to a system or application that does not maintain any persistent state between executions. In a stateless environment, each request or operation is independent, and no information is retained from previous interactions.
+
+This means that each request is treated as a new and isolated event, and there is no need for the system to remember previous states or data once a task is completed. Statelessness is commonly used in serverless architectures where each function execution is independent of others.
+
+To store data you can use [Scaleway Object Storage](/storage/object/), [Scaleway Managed Databases](/managed-databases/postgresql-and-mysql/), and [Scaleway Serverless Databases](/serverless/sql-databases/).
+
+## Status
+
+A Serverless Function can have the following statuses:
+* **Ready**: your Serverless Function is operational to serve requests.
+* **Pending**: your resource is under deployment.
+* **Error**: something went wrong during the deployment process or while building the source code into an image. [Check our troubleshooting documentation](/serverless/functions/troubleshooting/function-in-error-state/) to solve the issue.
-For each message that is sent to a queue, the trigger reads the message and invokes the associated function with the message as the input parameter.
-The function can then process the message and perform any required actions, such as updating a database or sending a notification.
## Timeout
diff --git a/serverless/functions/reference-content/functions-limitations.mdx b/serverless/functions/reference-content/functions-limitations.mdx
index 61ee48c16c..7c19e236d8 100644
--- a/serverless/functions/reference-content/functions-limitations.mdx
+++ b/serverless/functions/reference-content/functions-limitations.mdx
@@ -50,11 +50,15 @@ If the limits are exceeded during the execution of the function, a restart occur
To ensure the proper functioning of the product, we restrict the use of certain ports and environment variables.
-* Blocked ports:
- * **25**: Due to potential abuse (spam), no outbound traffic is allowed through this port, except from Scaleway Transactional Email SMTP servers.
- * **465**: Due to potential abuse (spam), no outbound traffic is allowed through this port, except from Scaleway Transactional Email SMTP servers.
-* Reserved environment variables:
- * `SCW_*`: Reserved for product configuration (for example: token validation).
+### Blocked ports
+
+Due to potential abuse (spam), no outbound traffic is allowed through the following ports, except from Scaleway Transactional Email SMTP servers.
+
+* **25**
+* **465**
+
+### Reserved environment variables
+* `SCW_*`: Reserved for product configuration (for example: token validation).
## Versioning and rollback
diff --git a/serverless/jobs/concepts.mdx b/serverless/jobs/concepts.mdx
index 096c22c60e..e6352edae2 100644
--- a/serverless/jobs/concepts.mdx
+++ b/serverless/jobs/concepts.mdx
@@ -12,10 +12,9 @@ categories:
- serverless
---
-## Container image
+## Container Registry
-A container image is a file that includes all the requirements and instructions of a complete and executable version of an application.
-When running a job, the selected image will be pulled to execute your workload. Images can come from public external registries or from the [Scaleway Container Registry](/containers/container-registry/concepts/#container-registry).
+Container Registry is the place where your images are stored before being deployed. We recommend using Scaleway Container Registry for optimal integration. See the [migration guide](/serverless/containers/api-cli/migrate-external-image-to-scaleway-registry/) for full details.
## Environment variables
@@ -45,22 +44,46 @@ The name of a job is part of the [job definition](#job-definition) and is used f
A job run is the execution of a job definition. It can be in a running, succeeded, canceled, or failed status. Each job run has a unique identifier and can be individually monitored using [Cockpit](/observability/cockpit/quickstart/).
+## Logging
+
+Serverless offers a built-in logging system based on Scaleway Cockpit to track the activity of your resources: see [monitoring Serverless Jobs](/serverless/jobs/how-to/monitor-job/).
+
## Maximum duration
The maximum duration option allows you to define the maximum execution time before your job is automatically killed.
+## Metrics
+
+Performance metrics for your Serverless resources are natively available: see [monitoring Serverless Jobs](/serverless/jobs/how-to/monitor-job/).
+
## Schedule (cron)
A schedule (cron) is a mechanism used to automatically start a Serverless Job at a specific time on a recurring schedule. It works similarly to a traditional Linux cron job, using the `* * * * *` format. Refer to our [cron schedules reference](/serverless/jobs/reference-content/cron-schedules/) for more information.
## Secrets reference
-A secret reference is a mechanism that allows you to use a secret stored in [Secret Manager](/identity-and-access-management/secret-manager/) within Serverless Jobs. It allows you to securely reference sensitive data, such as API secret keys, passwords, tokens, or certificates.
+A secret reference is a mechanism that allows you to use a secret stored in [Secret Manager](/identity-and-access-management/secret-manager/) within Serverless Jobs. It allows you to securely reference sensitive data, such as API secret keys, passwords, tokens, or certificates.
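+
+For illustration only (assuming the secret is exposed to the job as an environment variable, and using a hypothetical variable name), the job code can then read the referenced secret like any other environment variable:
+
+```python
+import os
+
+# DB_PASSWORD is a hypothetical name chosen when configuring the secret reference
+db_password = os.environ["DB_PASSWORD"]
+
+# Use the secret to authenticate (for example, to open a database connection);
+# never print or log its value
+```
+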
## Startup command
This optional field allows you to specify a custom command executed upon starting your job if your container image does not have one already, or if you use a public container image.
+## Status
+
+A Serverless Job run can have the following statuses:
+* **Succeeded**: your Serverless Job run finished in a successful state.
+* **Queued**: your Serverless Job run is waiting for resources to run.
+* **Error**: your Serverless Job run finished with an error or timeout. [Check our troubleshooting documentation](/serverless/jobs/troubleshooting/job-in-error-state/) to solve the issue.
+* **Canceled**: your Serverless Job run has been canceled by the user.
+
+## Stateless
+
+Refers to a system or application that does not maintain any persistent state between executions. In a stateless environment, each request or operation is independent, and no information is retained from previous interactions.
+
+This means that each request is treated as a new and isolated event, and there is no need for the system to remember previous states or data once a task is completed. Statelessness is commonly used in serverless architectures where each function execution is independent of others.
+
+To store data you can use [Scaleway Object Storage](/storage/object/), [Scaleway Managed Databases](/managed-databases/postgresql-and-mysql/), and [Scaleway Serverless Databases](/serverless/sql-databases/).
+
## vCPU-s
Unit used to measure the resource consumption of a container. It reflects the amount of vCPU used over time.
\ No newline at end of file