diff --git a/serverless/containers/how-to/secure-a-container.mdx b/serverless/containers/how-to/secure-a-container.mdx
index 8164c90538..b29425faa3 100644
--- a/serverless/containers/how-to/secure-a-container.mdx
+++ b/serverless/containers/how-to/secure-a-container.mdx
@@ -57,7 +57,7 @@ secret:
Add the following [resource description](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs/resources/container) in Terraform:
-```
+```hcl
secret_environment_variables = { "key" = "secret" }
```
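
Inside the running container, a secret environment variable set this way is read like any other environment variable. As a minimal sketch (the variable name `key` mirrors the Terraform example above, and the helper is illustrative):

```python
import os

# Hedged sketch: a secret set via `secret_environment_variables` is exposed
# to the container as a regular environment variable at runtime.
def read_secret(name: str = "key") -> str:
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret environment variable {name!r} is not set")
    return value
```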
diff --git a/tutorials/create-serverless-scraping/index.mdx b/tutorials/create-serverless-scraping/index.mdx
index 30fb27d7b6..67c5e842da 100644
--- a/tutorials/create-serverless-scraping/index.mdx
+++ b/tutorials/create-serverless-scraping/index.mdx
@@ -47,7 +47,7 @@ We start by creating the scraper program, or the "data producer".
SQS credentials and queue URL are read by the function from environment variables. Those variables are set by Terraform as explained in [one of the next sections](#create-a-terraform-file-to-provision-the-necessary-scaleway-resources). *If you choose another deployment method, such as the [console](https://console.scaleway.com/), do not forget to set them.*
```python
- queue_url = os.getenv('QUEUE_URL')
+ queue_url = os.getenv('QUEUE_URL')
sqs_access_key = os.getenv('SQS_ACCESS_KEY')
sqs_secret_access_key = os.getenv('SQS_SECRET_ACCESS_KEY')
```
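
Since a missing variable only surfaces when the queue is first used, a hedged sketch of a fail-fast check (variable names taken from the snippet above; the helper itself is illustrative) could look like this:

```python
import os

# Hedged sketch: fail fast at startup if any required variable is missing,
# so a misconfigured deployment is caught immediately rather than mid-run.
REQUIRED_VARS = ("QUEUE_URL", "SQS_ACCESS_KEY", "SQS_SECRET_ACCESS_KEY")

def load_sqs_config() -> dict:
    missing = [name for name in REQUIRED_VARS if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```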
@@ -65,10 +65,10 @@ We start by creating the scraper program, or the "data producer".
Using the AWS python sdk `boto3`, connect to the SQS queue and push the `title` and `url` of articles published less than 15 minutes ago.
```python
sqs = boto3.client(
- 'sqs',
- endpoint_url=SCW_SQS_URL,
- aws_access_key_id=sqs_access_key,
- aws_secret_access_key=sqs_secret_access_key,
+ 'sqs',
+ endpoint_url=SCW_SQS_URL,
+ aws_access_key_id=sqs_access_key,
+ aws_secret_access_key=sqs_secret_access_key,
region_name='fr-par')
for age, titleline in zip(ages, titlelines):
@@ -117,7 +117,7 @@ Next, let's create our consumer function. When receiving a message containing th
Lastly, we write the information into the database. *To keep the whole process completely automatic, the* `CREATE_TABLE_IF_NOT_EXISTS` *query is run each time. If you integrate the functions into an existing database, there is no need for it.*
```python
conn = None
- try:
+ try:
conn = pg8000.native.Connection(host=db_host, database=db_name, port=db_port, user=db_user, password=db_password, timeout=15)
conn.run(CREATE_TABLE_IF_NOT_EXISTS)
@@ -136,7 +136,7 @@ As explained in the [Scaleway Functions documentation](/serverless/functions/how
## Create a Terraform file to provision the necessary Scaleway resources
-For the purposes of this tutorial, we show how to provision all resources via Terraform.
+For the purposes of this tutorial, we show how to provision all resources via Terraform.
If you do not want to use Terraform, you can also create the required resources via the [console](https://console.scaleway.com/), the [Scaleway API](https://www.scaleway.com/en/developers/api/), or any other [developer tool](https://www.scaleway.com/en/developers/). Remember that if you do so, you will need to set up environment variables for functions as previously specified. The following documentation may help create the required resources:
@@ -149,7 +149,7 @@ If you do not want to use Terraform, you can also create the required resources
1. Create a directory called `terraform` (at the same level as the `scraper` and `consumer` directories created in the previous steps).
2. Inside it, create a file called `main.tf`.
3. In the file you just created, add the code below to set up the [Scaleway Terraform provider](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs) and your Project:
- ```
+ ```hcl
terraform {
required_providers {
scaleway = {
@@ -167,7 +167,7 @@ If you do not want to use Terraform, you can also create the required resources
}
```
4. Still in the same file, add the code below to provision the SQS resources: SQS activation for the project, separate credentials with appropriate permissions for producer and consumer, and an SQS queue:
- ```
+ ```hcl
resource "scaleway_mnq_sqs" "main" {
project_id = scaleway_account_project.mnq_tutorial.id
}
@@ -202,7 +202,7 @@ If you do not want to use Terraform, you can also create the required resources
}
```
5. Add the code below to provision the Managed Database for PostgreSQL resources. Note that here we are creating a random password and using it for the default and worker user:
- ```
+ ```hcl
resource "random_password" "dev_mnq_pg_exporter_password" {
length = 16
special = true
@@ -219,7 +219,7 @@ If you do not want to use Terraform, you can also create the required resources
node_type = "db-dev-s"
engine = "PostgreSQL-15"
is_ha_cluster = false
- disable_backup = true
+ disable_backup = true
user_name = "mnq_initial_user"
password = random_password.dev_mnq_pg_exporter_password.result
}
@@ -240,7 +240,7 @@ If you do not want to use Terraform, you can also create the required resources
}
resource "scaleway_rdb_database" "main" {
- instance_id = scaleway_rdb_instance.main.id
+ instance_id = scaleway_rdb_instance.main.id
name = "hn-database"
}
@@ -252,14 +252,14 @@ If you do not want to use Terraform, you can also create the required resources
}
resource "scaleway_rdb_privilege" "mnq_user_role" {
- instance_id = scaleway_rdb_instance.main.id
+ instance_id = scaleway_rdb_instance.main.id
user_name = scaleway_rdb_user.worker.name
database_name = scaleway_rdb_database.main.name
permission = "all"
}
```
6. Add the code below to provision the function resources: first activate the namespace, then zip the code locally and create the functions in the cloud. Note that we reference variables from other resources to fully automate the deployment process:
- ```
+ ```hcl
locals {
scraper_folder_path = "../scraper"
consumer_folder_path = "../consumer"
@@ -354,17 +354,17 @@ If you do not want to use Terraform, you can also create the required resources
}
}
```
- Note that a folder `archives` needs to be created manually if you started from scratch. It is included in the git repository.
-7. Add the code below to provision the triggers resources. The cron trigger activates at the minutes `[0, 15, 30, 45]` of every hour. No arguments are passed, but we could do so by specifying them in JSON format in the `args` parameter.
- ```
+ Note that a folder `archives` needs to be created manually if you started from scratch. It is included in the git repository.
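
Conceptually, the archiving step that Terraform performs for each function can be sketched as follows (the helper, paths, and file names are illustrative, not part of the tutorial's code):

```python
import os
import zipfile

# Hedged sketch of what the Terraform archive step does: zip a function
# folder into the archives directory before uploading it to the cloud.
def zip_function_code(src_dir: str, archive_path: str) -> str:
    os.makedirs(os.path.dirname(archive_path), exist_ok=True)
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, arcname=os.path.relpath(full, src_dir))
    return archive_path
```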
+7. Add the code below to provision the trigger resources. The cron trigger activates at minutes `[0, 15, 30, 45]` of every hour. No arguments are passed, but we could do so by specifying them in JSON format in the `args` parameter.
+ ```hcl
resource "scaleway_function_cron" "scraper_cron" {
- function_id = scaleway_function.scraper.id
+ function_id = scaleway_function.scraper.id
schedule = "0,15,30,45 * * * *"
args = jsonencode({})
}
resource "scaleway_function_trigger" "consumer_sqs_trigger" {
- function_id = scaleway_function.consumer.id
+ function_id = scaleway_function.consumer.id
name = "hn-sqs-trigger"
sqs {
project_id = scaleway_mnq_sqs.main.project_id
@@ -378,7 +378,7 @@ Terraform makes this very straightforward. To provision all the resources and ge
-```
+```sh
cd terraform
terraform init
-terraform plan
+terraform plan
terraform apply
```
@@ -386,14 +386,14 @@ terraform apply
Go to the [Scaleway console](https://console.scaleway.com/), and check the logs and metrics for Serverless Functions' execution and Messaging and Queuing SQS queue statistics.
-To make sure the data is correctly stored in the database, you can [connect to it directly](/managed-databases/postgresql-and-mysql/how-to/connect-database-instance/) via a CLI tool such as `psql`.
+To make sure the data is correctly stored in the database, you can [connect to it directly](/managed-databases/postgresql-and-mysql/how-to/connect-database-instance/) via a CLI tool such as `psql`.
Retrieve the instance IP and port of your Managed Database from the console, under the [Managed Database section](https://console.scaleway.com/rdb/instances).
Use the following command to connect to your database. When prompted for a password, you can find it by running `terraform output -json`.
-```
+```sh
psql -h <instance_ip> -p <port> -d hn-database -U worker
```
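
To avoid copying the password by hand, a short script can parse the `terraform output -json` result (the output name `db_password` is hypothetical; use the names defined in your own `output` blocks):

```python
import json
import subprocess

# Hedged sketch: read `terraform output -json` and extract one output value.
# The output name passed in (e.g. "db_password") is assumed, not prescribed.
def terraform_output(name, raw_json=None):
    if raw_json is None:
        raw_json = subprocess.run(
            ["terraform", "output", "-json"],
            capture_output=True, text=True, check=True,
        ).stdout
    return json.loads(raw_json)[name]["value"]
```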
-When you are done testing, don't forget to clean up! To do so, run:
+When you are done testing, don't forget to clean up! To do so, run:
-```
+```sh
cd terraform
terraform destroy
@@ -405,7 +405,7 @@ We have shown how to asynchronously decouple the producer and the consumer using
While the volume of data processed in this example is quite small, thanks to the Messaging and Queuing SQS queue's robustness and the auto-scaling capabilities of the Serverless Functions, you can adapt this example to manage larger workloads.
Here are some possible extensions to this basic example:
- - Replace the simple proposed logic with your own. What about counting how many times some keywords (e.g: copilot, serverless, microservice) appear in Hacker News articles?
+ - Replace the simple proposed logic with your own. What about counting how many times some keywords (e.g., copilot, serverless, microservice) appear in Hacker News articles?
- Define multiple cron triggers for different websites and pass the website as an argument to the function. Or, create multiple functions that feed the same queue.
- Use a [Serverless Container](/serverless/containers/quickstart/) instead of the consumer function, and use a command line tool such as `htmldoc` or `pandoc` to convert the scraped articles to PDF and upload the result to a [Scaleway Object Storage](/storage/object/quickstart/) S3 bucket.
- Replace the Managed Database for PostgreSQL with a [Scaleway Serverless Database](/serverless/sql-databases/quickstart/), so that all the infrastructure lives in the serverless ecosystem! *Note that at the moment there is no Terraform support for Serverless Database, hence the choice here to use Managed Database for PostgreSQL*.
\ No newline at end of file
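
As a starting point for the keyword-counting extension suggested above, a hedged sketch (the keywords and function name are illustrative):

```python
from collections import Counter

# Hedged sketch for the keyword-counting extension: count occurrences of
# chosen keywords across a list of scraped article titles.
KEYWORDS = ("copilot", "serverless", "microservice")

def count_keywords(titles):
    counts = Counter()
    for title in titles:
        lowered = title.lower()
        for keyword in KEYWORDS:
            counts[keyword] += lowered.count(keyword)
    return dict(counts)
```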
diff --git a/tutorials/deploy-laravel-on-serverless-containers/index.mdx b/tutorials/deploy-laravel-on-serverless-containers/index.mdx
index ff3827cec5..9f1374f2d9 100644
--- a/tutorials/deploy-laravel-on-serverless-containers/index.mdx
+++ b/tutorials/deploy-laravel-on-serverless-containers/index.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This tutorial provides a step-by-step guide for deploying a containerized Laravel application on the Scaleway cloud platform.
tags: laravel php docker nginx fpm
hero: assets/scaleway-umami.webp
-categories:
+categories:
- containers
- container-registry
dates:
@@ -42,7 +42,7 @@ Laravel applications make use of [queues](https://laravel.com/docs/10.x/queues)
2. Create a queue. In this example, we create a `Standard` queue (At-least-once delivery, the order of messages is not preserved) with the default parameters. This queue will be the default queue used by our application.
-
+
3. Generate credentials. In this example, we generate the credentials with `read` and `write` access.
@@ -53,7 +53,7 @@ In this section, we will focus on building the containerized image. With Docker,
1. Create the Dockerfile: we create a `Dockerfile` which is a text file that contains instructions for Docker to build the image. In this example, we specify the base image as `php:fpm-alpine`, install and enable the necessary php dependencies with [`install-php-extensions`](https://github.com/mlocati/docker-php-extension-installer), and determine the commands to be executed at startup.
- ```
+ ```dockerfile
# Dockerfile
FROM --platform=linux/amd64 php:8.2.6-fpm-alpine3.18
@@ -84,9 +84,9 @@ In this section, we will focus on building the containerized image. With Docker,
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```
2. Create the supervisor configuration file. [Supervisor](http://supervisord.org/) is a reliable and efficient process control system for managing and monitoring processes; it is used here because multiple processes run within the container. In this example, we create a `stubs/supervisor/supervisord.conf` file with the following configuration to start the Nginx web server, the php-fpm pool, and 5 workers:
- ```
+ ```conf
# stubs/supervisor/supervisord.conf
- [supervisord]
+ [supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
@@ -128,43 +128,43 @@ In this section, we will focus on building the containerized image. With Docker,
3. Create web server configuration files. Nginx will be used to serve the static assets and to forward the requests to the php-fpm pool for processing. In this example, we create the following configuration files `stubs/nginx/http.d/default.conf` and `stubs/nginx/nginx.conf`.
- ```
+ ```nginx
# stubs/nginx/http.d/default.conf
server {
listen 80;
listen [::]:80;
server_name _;
root /var/www/html/public;
-
+
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";
-
+
index index.php;
-
+
charset utf-8;
-
+
location / {
try_files $uri $uri/ /index.php?$query_string;
}
-
+
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
-
+
error_page 404 /index.php;
-
+
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
-
+
location ~ /\.(?!well-known).* {
deny all;
}
}
```
- ```
+ ```nginx
# stubs/nginx/nginx.conf
error_log /var/log/nginx/error.log notice;
events {
@@ -183,11 +183,11 @@ In this section, we will focus on building the containerized image. With Docker,
pid /var/run/nginx.pid;
user nginx;
worker_processes auto;
- ```
+ ```
4. Create the php-fpm configuration file. The configuration `stubs/php/php-fpm.d/zz-docker.conf` file should be created, and the php-fpm pool configured to render the dynamic pages of the Laravel application. Depending on the needs of your application, you might have to fine-tune the configuration of the process manager. Further information is available in the [php manual](https://www.php.net/manual/en/install.fpm.configuration.php).
-
- ```
+
+ ```conf
[global]
daemonize = no
@@ -197,27 +197,27 @@ In this section, we will focus on building the containerized image. With Docker,
listen.group = www-data
listen.mode = 0660
- pm = dynamic
- pm.max_children = 75
- pm.start_servers = 10
- pm.min_spare_servers = 5
- pm.max_spare_servers = 20
+ pm = dynamic
+ pm.max_children = 75
+ pm.start_servers = 10
+ pm.min_spare_servers = 5
+ pm.max_spare_servers = 20
pm.process_idle_timeout = 10s
```
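
As a rough, non-authoritative sizing aid, `pm.max_children` is often approximated as the memory budget left for PHP-FPM divided by the footprint of one worker; the arithmetic can be sketched as follows (all numbers are illustrative assumptions, not Scaleway guidance):

```python
# Hedged sizing sketch: approximate pm.max_children as the memory left after
# other processes (Nginx, supervisord, ...) divided by one worker's footprint.
def max_children(container_mb: int, reserved_mb: int, worker_mb: int) -> int:
    return max(1, (container_mb - reserved_mb) // worker_mb)
```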
5. Build the docker image.
- ```
+ ```sh
docker build -t my-image .
```
-## Creating Container Registry
+## Creating Container Registry
1. [Create a Scaleway Container Registry namespace](/containers/container-registry/how-to/create-namespace/) in the `PAR` region. Set the visibility to `Private` to avoid having your container retrieved without proper authentication and authorization.
2. Run the following command in your local terminal to log in to the newly created Container Registry.
- ```
+ ```sh
docker login rg.fr-par.scw.cloud/namespace-zen-feistel -u nologin --password-stdin <<< "$SCW_SECRET_KEY"
```
@@ -226,8 +226,8 @@ In this section, we will focus on building the containerized image. With Docker,
3. Tag the image and push it to the Container Registry namespace.
-
- ```
+
+ ```sh
docker tag my-image rg.fr-par.scw.cloud/namespace-zen-feistel/my-image:v1
docker push rg.fr-par.scw.cloud/namespace-zen-feistel/my-image:v1
```
@@ -237,7 +237,7 @@ In this section, we will focus on building the containerized image. With Docker,
The Scaleway documentation website provides a Quickstart on how to [create and manage a Serverless Container Namespace](/serverless/containers/quickstart/).
1. Create a Serverless Containers namespace. In this example, we create the `my-laravel-application` namespace and configure the environment variables and secrets necessary for our application. In particular, we must add all the variables needed to connect to the previously created SQS/SNS queue.
-
+
By default, Laravel expects the following environment variables/secrets to be filled in for queues: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`, `QUEUE_CONNECTION`, `SQS_PREFIX` and `SQS_QUEUE`.
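
For reference, here is a hedged sketch of how these variables might look in a Laravel `.env` file; every value is a placeholder to be replaced with the credentials and queue details generated earlier in this tutorial:

```env
AWS_ACCESS_KEY_ID=<your-sqs-access-key>
AWS_SECRET_ACCESS_KEY=<your-sqs-secret-key>
AWS_DEFAULT_REGION=fr-par
QUEUE_CONNECTION=sqs
SQS_PREFIX=<your-queue-url-without-the-queue-name>
SQS_QUEUE=<your-queue-name>
```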
2. Deploy the application. Click **+ Deploy a Container** once the namespace is created, and follow the instructions of the creation wizard. Select the registry namespace and the previously uploaded Docker image and configure the listening port (the Nginx web server is listening on port 80). For the CPU and memory, define at least 560 mvCPU and 256 MB respectively. To reduce the limitations due to [cold start](/serverless/containers/concepts/#cold-start), we will run at least 1 instance.
@@ -274,7 +274,7 @@ By default, some metrics will be available in the Scaleway console. However, to
To test the load on the application, there is a basic test route that pushes a job into the queue and returns the welcome page.
-``` php
+```php
# routes/web.php
use App\Jobs\ProcessPodcast;
@@ -287,7 +287,7 @@ Route::get('/test', function () {
```
The job does nothing but wait for a couple of seconds.
-``` php
+```php
# app/Jobs/ProcessPodcast
class ProcessPodcast implements ShouldQueue
@@ -300,11 +300,11 @@ class ProcessPodcast implements ShouldQueue
```
Then, use `hey` to send 400 requests (20 concurrent requests) to this route.
-```
+```sh
hey -n 400 -q 20 https://example.com/test
```
-We can see that our deployment is not sufficiently sized to handle such workload and the response times are far from ideal.
+We can see that our deployment is not sufficiently sized to handle such a workload, and the response times are far from ideal.
```
Response time histogram:
diff --git a/tutorials/encode-videos-using-serverless-jobs/index.mdx b/tutorials/encode-videos-using-serverless-jobs/index.mdx
index 3f68a4d1b4..01051cdcb3 100644
--- a/tutorials/encode-videos-using-serverless-jobs/index.mdx
+++ b/tutorials/encode-videos-using-serverless-jobs/index.mdx
@@ -21,7 +21,7 @@ This tutorial demonstrates the process of encoding videos retrieved from Object
- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
-- An [Object Storage bucket](/storage/object/how-to/create-a-bucket/)
+- An [Object Storage bucket](/storage/object/how-to/create-a-bucket/)
- A valid [API key](/identity-and-access-management/iam/how-to/create-api-keys/)
- Installed [Docker engine](https://docs.docker.com/engine/install/)
@@ -72,7 +72,7 @@ The initial step involves defining a Docker image for interacting with the S3 Ob
This Dockerfile uses `linuxserver/ffmpeg` as a base image bundled with FFMPEG along with a variety of encoding codecs and installs [MinIO](https://min.io/) as a command-line S3 client to copy files over Object Storage.
3. Build and [push the image](/containers/container-registry/how-to/push-images/) to your Container Registry:
- ```
+ ```bash
docker build . -t <registry-endpoint>/<image-name>:<tag>
docker push <registry-endpoint>/<image-name>:<tag>
```
@@ -125,7 +125,7 @@ Once the run status is **Succeeded**, the encoded video can be found in your S3
Your job can also be triggered through the [Scaleway API](https://www.scaleway.com/en/developers/api/serverless-jobs/#path-job-definitions-run-an-existing-job-definition-by-its-unique-identifier-this-will-create-a-new-job-run) using the same environment variables:
-```
+```bash
curl -X POST \
-H "X-Auth-Token: <SCW_SECRET_KEY>" \
-H "Content-Type: application/json" \
diff --git a/tutorials/large-messages/index.mdx b/tutorials/large-messages/index.mdx
index 4f851c6a4f..342ae7398b 100644
--- a/tutorials/large-messages/index.mdx
+++ b/tutorials/large-messages/index.mdx
@@ -4,7 +4,7 @@ meta:
description: Learn how to build a serverless architecture for handling large messages with Scaleway's NATS, Serverless Functions, and Object Storage. Follow our step-by-step Terraform-based tutorial for asynchronous file conversion using messaging, functions, and triggers.
content:
h1: Create a serverless architecture for handling large messages using Scaleway's NATS, Serverless Functions, and Object Storage.
- paragraph: Learn how to build a serverless architecture for handling large messages with Scaleway's NATS, Serverless Functions, and Object Storage. Follow our step-by-step Terraform-based tutorial for asynchronous file conversion using messaging, functions, and triggers.
+ paragraph: Learn how to build a serverless architecture for handling large messages with Scaleway's NATS, Serverless Functions, and Object Storage. Follow our step-by-step Terraform-based tutorial for asynchronous file conversion using messaging, functions, and triggers.
categories:
- messaging
- functions
@@ -52,7 +52,7 @@ Three essential services are required to ensure everything is working together:
Remember that you can refer to the [code repository](https://github.com/rouche-q/serverless-examples/tree/main/projects/large-messages/README.md) to check all code files.
- ```terraform
+ ```hcl
terraform {
required_providers {
scaleway = {
@@ -74,7 +74,7 @@ Three essential services are required to ensure everything is working together:
The Scaleway provider is needed, but also three providers from HashiCorp that we will use later in the tutorial.
2. Include two variables to enable the secure passage of your Scaleway credentials. Then initialize the Scaleway provider in the `fr-par-1` region.
- ```terraform
+ ```hcl
variable "scw_access_key_id" {
type = string
sensitive = true
@@ -97,7 +97,7 @@ Three essential services are required to ensure everything is working together:
```
4. Continuing in the `main.tf` file, add the following Terraform code to create an Object Storage bucket that will be used for storing your images.
- ```terraform
+ ```hcl
resource "random_id" "bucket" {
byte_length = 8
}
@@ -119,7 +119,7 @@ Three essential services are required to ensure everything is working together:
In this code, the resource `random_id.bucket` generates a random ID, which is then passed to the object bucket to ensure its uniqueness. Additionally, a `scaleway_object_bucket_acl` ACL is applied to the bucket, setting it to private and outputting the bucket name for use in your producer.
5. Add these resources to create a NATS account and your NATS credentials file:
- ```terraform
+ ```hcl
resource "scaleway_mnq_nats_account" "large_messages" {
name = "nats-acc-large-messages"
}
@@ -162,7 +162,7 @@ As mentioned earlier, the producer will be implemented as a straightforward shel
Our script takes the file path that we want to upload as the first parameter.
To upload the file, we will use the AWS CLI configured with the Scaleway endpoint and credentials because Scaleway Object storage is fully compliant with S3.
-
+
3. Pass the path to the AWS CLI command as follows:
```bash
aws s3 cp $1 s3://$SCW_BUCKET
@@ -211,7 +211,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
```
5. Before proceeding with the function's logic, improve the Terraform code by adding the following code to your `main.tf` file:
- ```terraform
+ ```hcl
resource "null_resource" "install_dependencies" {
provisioner "local-exec" {
command = <<-EOT
@@ -240,7 +240,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
The `null_resource` is used to download and package the correct versions of the libraries that we use with the function. Learn more about this in the [Scaleway documentation.](/serverless/functions/how-to/package-function-dependencies-in-zip/#specific-libraries-(with-needs-for-specific-c-compiled-code))
6. Create the function namespace.
- ```terraform
+ ```hcl
resource "scaleway_function_namespace" "large_messages" {
name = "large-messages-function"
description = "Large messages namespace"
@@ -248,7 +248,7 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
```
7. Add the resource to set up the function.
- ```terraform
+ ```hcl
resource "scaleway_function" "large_messages" {
namespace_id = scaleway_function_namespace.large_messages.id
runtime = "python311"
@@ -275,15 +275,15 @@ We continue using the Scaleway ecosystem and deploy the consumer using a Serverl
Essential environment variables and secrets to use in our function logic are also added.
8. Create the function trigger to "wake up" the function when a NATS message comes in.
- ```terraform
+ ```hcl
resource "scaleway_function_trigger" "large_messages" {
function_id = scaleway_function.large_messages.id
name = "large-messages-trigger"
nats {
account_id = scaleway_mnq_nats_account.large_messages.id
subject = "large-messages"
- }
- }
+ }
+ }
```
It defines which account ID and subject to observe for getting messages.
@@ -364,6 +364,6 @@ terraform apply
## Conclusion, going further
In this introductory tutorial, we have demonstrated the usage of the NATS server for Messaging and Queuing, along with other services from the Scaleway ecosystem, to facilitate the transfer of large messages surpassing the typical size constraints. There are possibilities to expand upon this tutorial for various use cases, such as:
-
+
- Extending the conversion capabilities to handle different document types like `docx`.
- Sending URLs directly to NATS and converting HTML content to PDF.
\ No newline at end of file
diff --git a/tutorials/manage-instances-with-terraform-and-functions/index.mdx b/tutorials/manage-instances-with-terraform-and-functions/index.mdx
index cb04ce6113..2b68b0a2e4 100644
--- a/tutorials/manage-instances-with-terraform-and-functions/index.mdx
+++ b/tutorials/manage-instances-with-terraform-and-functions/index.mdx
@@ -57,7 +57,7 @@ This tutorial will simulate a project with a production environment running all
-- variables.tf
```
4. Edit the `backend.tf` file to enable remote configuration backup:
- ```json
+ ```hcl
terraform {
backend "s3" {
bucket = "XXXXXXXXX"
@@ -78,7 +78,7 @@ This tutorial will simulate a project with a production environment running all
*/
```
5. Edit the `provider.tf` file and add Scaleway as a provider:
- ```json
+ ```hcl
terraform {
required_providers {
scaleway = {
@@ -91,7 +91,7 @@ This tutorial will simulate a project with a production environment running all
```
6. Specify the following variables in the `variables.tf` file:
- ```json
+ ```hcl
variable "zone" {
type = string
}
@@ -115,7 +115,7 @@ This tutorial will simulate a project with a production environment running all
}
```
7. Add the variable values to `terraform.tfvars`:
- ```bash
+ ```hcl
zone = "fr-par-1"
region = "fr-par"
env = "dev"
@@ -170,7 +170,7 @@ def handle(event, context):
## Configuring your infrastructure
1. Edit the file `main.tf` to add a production Instance using a GP1-S named "Prod":
- ```json
+ ```hcl
## Configuring Production environment
resource "scaleway_instance_ip" "public_ip-prod" {
project_id = var.project_id
@@ -193,7 +193,7 @@ def handle(event, context):
}
```
2. Add a development Instance using a DEV1-L named "Dev":
- ```json
+ ```hcl
## Configuring the development environment, which will automatically be turned off on weekends and turned back on on Monday mornings
resource "scaleway_instance_ip" "public_ip-dev" {
project_id = var.project_id
@@ -215,7 +215,7 @@ def handle(event, context):
}
```
3. Write a function that will run the code you have just written:
- ```json
+ ```hcl
# Creating function code archive that will then be updated
data "archive_file" "source_zip" {
type = "zip"
@@ -247,7 +247,7 @@ def handle(event, context):
}
```
4. Add a cronjob attached to the function to turn your function off every Friday evening:
- ```json
+ ```hcl
# Adding a first cron to turn off the Instance every friday evening (11:30 pm)
resource "scaleway_function_cron" "turn-off" {
function_id = scaleway_function.main.id
@@ -261,7 +261,7 @@ def handle(event, context):
}
```
5. Create a cronjob attached to the function to turn your function on every Monday morning:
- ```json
+ ```hcl
# Adding a second cron to turn on the Instance every monday morning (7:00 am)
resource "scaleway_function_cron" "turn-on" {
function_id = scaleway_function.main.id
diff --git a/tutorials/snapshot-instances-jobs/index.mdx b/tutorials/snapshot-instances-jobs/index.mdx
index 99c358473f..7b04845764 100644
--- a/tutorials/snapshot-instances-jobs/index.mdx
+++ b/tutorials/snapshot-instances-jobs/index.mdx
@@ -169,7 +169,7 @@ Serverless Jobs rely on containers to run in the cloud, and therefore require a
1. Create a `Dockerfile`, and add the following code to it:
- ```docker
+ ```dockerfile
# Using golang alpine image
FROM golang:1.22-alpine
diff --git a/tutorials/strapi-app-serverless-containers-sqldb/index.mdx b/tutorials/strapi-app-serverless-containers-sqldb/index.mdx
index c416876cb1..9d5c2e76e8 100644
--- a/tutorials/strapi-app-serverless-containers-sqldb/index.mdx
+++ b/tutorials/strapi-app-serverless-containers-sqldb/index.mdx
@@ -42,11 +42,11 @@ You can either deploy your application:
2. Run the command below to make sure the environment variables are properly set:
```sh
- scw info
+ scw info
```
This command displays your access key and secret key in the last two lines of the output. The `ORIGIN` column should display `env (SCW_ACCESS_KEY)` and `env (SCW_SECRET_KEY)`, and not `default profile`.
-
+
```bash
KEY VALUE ORIGIN
(...)
@@ -77,16 +77,16 @@ You can either deploy your application:
&& psql -h $DATABASE_HOST -p $DATABASE_PORT \
-d $DATABASE_NAME -U $DATABASE_USERNAME
```
- An input field with the name of your database should display:
+ A prompt with the name of your database should appear:
```
psql (15.3, server 16.1 (Debian 16.1-1.pgdg120+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_128_GCM_SHA256, compression: off)
- Type "help" for help.
+ Type "help" for help.
tutorial-strapi-blog-db=>
```
-
+
### Running Strapi locally
1. Create a Strapi blog template
@@ -95,7 +95,7 @@ You can either deploy your application:
--dbclient=postgres --dbhost=$DATABASE_HOST \
--dbport=$DATABASE_PORT --dbname=$DATABASE_NAME \
--dbusername=$DATABASE_USERNAME \
- --dbpassword=$DATABASE_PASSWORD --dbssl=true
+ --dbpassword=$DATABASE_PASSWORD --dbssl=true
```
2. Access the folder you just created:
@@ -120,7 +120,7 @@ You can either deploy your application:
touch Dockerfile
```
2. Add the code below to your file, save it, and exit.
- ```bash
+ ```dockerfile
# Creating a multi-stage build for production
FROM node:20-alpine as build
RUN apk update && apk add --no-cache build-base gcc autoconf automake zlib-dev libpng-dev vips-dev git > /dev/null 2>&1
@@ -189,12 +189,12 @@ You can either deploy your application:
├── jsconfig.json
├── package.json
├── README.md
- └── yarn.lock
+ └── yarn.lock
```
5. Build your application container:
```bash
- docker build -t my-strapi-blog .
+ docker build -t my-strapi-blog .
```
The docker build image process can take a few minutes, particularly during the `npm install` step, since Strapi requires around 1 GB of node modules to be built.
@@ -281,7 +281,7 @@ You can either deploy your application:
```
When the status appears as `ready`, you can access the Strapi Administration Panel via your browser.
-
+
3. Copy the endpoint URL displayed next to the `DomainName` property, and paste it into your browser. The main Strapi page displays. Click "Open the administration" or add `/admin` to your browser URL to access the Strapi Administration Panel.
4. (Optional) You can check that Strapi APIs are working with the following command, or by accessing `https://{container_url}/api/articles` in your browser:
@@ -319,7 +319,7 @@ However, your Strapi container currently connects to your database with your [us
To secure your deployment, we will now add a dedicated [IAM application](/identity-and-access-management/iam/concepts/#application), give it the minimum required permissions, and provide its credentials to your Strapi container.
-1. Run the following command to create an [IAM application](/identity-and-access-management/iam/concepts/#application) and export it as a variable:
+1. Run the following command to create an [IAM application](/identity-and-access-management/iam/concepts/#application) and export it as a variable:
```bash
export SCW_APPLICATION_ID=$(scw iam application create name=tutorial-strapi-blog -o json | jq -r '.id')
```
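The `jq -r '.id'` filter in the command above simply extracts the raw `id` field from the CLI's JSON response. For illustration, with a sample payload (hypothetical values, not real CLI output):

```shell
# Pipe a sample JSON object through jq and extract the id field as raw text
echo '{"id": "123e4567-e89b-12d3-a456-426614174000", "name": "tutorial-strapi-blog"}' | jq -r '.id'
# 123e4567-e89b-12d3-a456-426614174000
```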
@@ -364,10 +364,10 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
secret-environment-variables.5.value=$JWT_SECRET redeploy=true
```
-6. Refresh your browser page displaying the Strapi Administration Panel. An updated version displays.
+6. Refresh your browser page displaying the Strapi Administration Panel. An updated version displays.
You have now deployed a full serverless Strapi blog example!
-
+
## Going further with containers
- Inspect your newly created resources in the Scaleway console:
@@ -399,11 +399,11 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
2. Run the command below to make sure the environment variables are properly set:
```sh
- scw info
+ scw info
```
This command displays your access_key and secret_key in the last two lines of the output. The `ORIGIN` column should display `env (SCW_ACCESS_KEY)` and `env (SCW_SECRET_KEY)`, and not `default profile`.
-
+
```bash
KEY VALUE ORIGIN
(...)
@@ -512,12 +512,12 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
├── jsconfig.json
├── package.json
├── README.md
- └── yarn.lock
+ └── yarn.lock
```
8. Build your application container:
```bash
- docker build -t my-strapi-blog .
+ docker build -t my-strapi-blog .
```
The Docker image build process can take a few minutes, particularly during the `npm install` step, since Strapi requires around 1 GB of node modules to be built.
@@ -545,7 +545,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
docker push $REGISTRY_ENDPOINT/my-strapi-blog:latest
```
-
+
### Creating the Terraform configuration
1. Run the following command to create a new folder to store your Terraform files, and access it:
@@ -553,7 +553,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
cd ..
mkdir terraform-strapi-blog &&
cd terraform-strapi-blog
- ```
+ ```
2. Create an empty `main.tf` Terraform file inside the folder.
@@ -565,7 +565,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
```
3. Add the following code to your `main.tf` file:
- ```json
+ ```hcl
terraform {
required_providers {
scaleway = {
@@ -577,12 +577,12 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
}
required_version = ">= 0.13"
}
-
+
variable "REGISTRY_ENDPOINT" {
type = string
description = "Container Registry endpoint where your application container is stored"
}
-
+
variable "DEFAULT_PROJECT_ID" {
type = string
description = "Project ID where your resources will be created"
@@ -606,12 +606,12 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
for_each = toset(local.secrets)
length = 16
}
-
+
resource scaleway_container_namespace main {
name = "tutorial-strapi-blog-tf"
description = "Namespace created for full serverless Strapi blog deployment"
}
-
+
resource scaleway_container main {
name = "tutorial-strapi-blog-tf"
@@ -628,7 +628,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
privacy = "public"
protocol = "http1"
deploy = true
-
+
environment_variables = {
"DATABASE_CLIENT"="postgres",
"DATABASE_USERNAME" = scaleway_iam_application.app.id,
@@ -648,11 +648,11 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
"JWT_SECRET" = random_bytes.generated_secrets["jwt_secret"].base64
}
}
-
+
resource scaleway_iam_application "app" {
name = "tutorial-strapi-blog-tf"
}
-
+
resource scaleway_iam_policy "db_access" {
name = "tutorial-strapi-policy-tf"
description = "Gives tutorial Strapi blog access to Serverless SQL Database"
@@ -662,17 +662,17 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
permission_set_names = ["ServerlessSQLDatabaseReadWrite"]
}
}
-
+
resource scaleway_iam_api_key "api_key" {
application_id = scaleway_iam_application.app.id
}
-
+
resource scaleway_sdb_sql_database "database" {
name = "tutorial-strapi-tf"
min_cpu = 0
max_cpu = 8
}
-
+
output "database_connection_string" {
// Output as an example, you can give this string to your application
value = format("postgres://%s:%s@%s",
@@ -682,7 +682,7 @@ To secure your deployment, we will now add a dedicated [IAM application](/identi
)
sensitive = true
}
-
+
output "container_url" {
// Output as an example, you can give this string to your application
value = scaleway_container.main.domain_name
@@ -718,7 +718,7 @@ The Terraform file creates several resources:
```
Replace the `ADMIN_EMAIL` and `ADMIN_PASSWORD` values with your own email and password. Optionally, you can also edit the `ADMIN_FIRSTNAME` and `ADMIN_LASTNAME` values to change the default admin first and last names.
- Strapi admin password requires at least 8 characters including one uppercase, one lowercase, one number, and one special character.
+ The Strapi admin password requires at least 8 characters, including one uppercase letter, one lowercase letter, one number, and one special character.
If the admin password or email does not meet the requirements, the container will not start.
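As a quick sanity check before deploying, the password policy above can be sketched in a few lines of Python (a hypothetical helper, not part of Strapi):

```python
import re

def is_valid_admin_password(password: str) -> bool:
    """Check the admin password policy described above: at least 8
    characters, with one uppercase letter, one lowercase letter,
    one digit, and one special character."""
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(is_valid_admin_password("ChangeMe123!"))  # True
print(is_valid_admin_password("weakpass"))      # False
```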
@@ -813,7 +813,7 @@ Once you are done, run the following command to stop all your resources:
- **Fine-tune deployment options** such as autoscaling, targeted regions, and more. You can find more information by typing `scw container deploy --help` in your terminal, or by referring to the [dedicated documentation](/serverless/containers/how-to/manage-a-container/)
- Create a secondary production environment by duplicating your built container, building it with `NODE_ENV=production`, running `npm run start`, and plugging it into another **Serverless SQL Database**. For instance, this allows you to keep editing content-types in your first environment, which is not possible in production.
-
+
## Troubleshooting
If you encounter any issues, first check that you meet all the requirements.
@@ -827,7 +827,7 @@ If you happen to encounter any issues, first check that you meet all the require
UpdatedAt 1 year ago
Description -
```
-
+
You can also find and compare your Project and Organization ID in the [Scaleway console settings](https://console.scaleway.com/project/settings).
- You have **Docker Engine** installed. Running the `docker -v` command in a terminal should display your currently installed Docker version: