2 changes: 1 addition & 1 deletion serverless/containers/how-to/secure-a-container.mdx
@@ -57,7 +57,7 @@ secret:

Add the following [resource description](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs/resources/container) in Terraform:

-```
+```hcl
secret_environment_variables = { "key" = "secret" }
```
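
At runtime, the container reads the secret like any other environment variable. A minimal sketch, assuming a Python runtime and the `key` name from the snippet above:

```python
import os

# "key" matches the name declared in secret_environment_variables above
secret = os.environ["key"]
```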

46 changes: 23 additions & 23 deletions tutorials/create-serverless-scraping/index.mdx
@@ -47,7 +47,7 @@ We start by creating the scraper program, or the "data producer".

SQS credentials and queue URL are read by the function from environment variables. Those variables are set by Terraform as explained in [one of the next sections](#create-a-terraform-file-to-provision-the-necessary-scaleway-resources). *If you choose another deployment method, such as the [console](https://console.scaleway.com/), do not forget to set them.*
```python
-queue_url = os.getenv('QUEUE_URL')
+queue_url = os.getenv('QUEUE_URL')
sqs_access_key = os.getenv('SQS_ACCESS_KEY')
sqs_secret_access_key = os.getenv('SQS_SECRET_ACCESS_KEY')
```
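
If you deploy through the console instead of Terraform, a fail-fast check at startup makes a forgotten variable obvious. A minimal sketch (the check and error wording are ours, not part of the tutorial's code):

```python
import os

# Fail fast if a deployment method other than Terraform forgot a variable.
REQUIRED = ("QUEUE_URL", "SQS_ACCESS_KEY", "SQS_SECRET_ACCESS_KEY")
missing = [name for name in REQUIRED if not os.getenv(name)]
if missing:
    raise RuntimeError(f"Missing required environment variables: {missing}")
```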
@@ -65,10 +65,10 @@ We start by creating the scraper program, or the "data producer".
Using the AWS Python SDK `boto3`, connect to the SQS queue and push the `title` and `url` of articles published less than 15 minutes ago.
```python
sqs = boto3.client(
-'sqs',
-endpoint_url=SCW_SQS_URL,
-aws_access_key_id=sqs_access_key,
-aws_secret_access_key=sqs_secret_access_key,
+'sqs',
+endpoint_url=SCW_SQS_URL,
+aws_access_key_id=sqs_access_key,
+aws_secret_access_key=sqs_secret_access_key,
region_name='fr-par')

for age, titleline in zip(ages, titlelines):
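    # A sketch of the push step that follows (variable names and message
    # format are assumptions, and `json` is assumed to be imported):
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"title": titleline.text, "url": titleline["href"]}),
    )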
@@ -117,7 +117,7 @@ Next, let's create our consumer function. When receiving a message containing the
Lastly, we write the information into the database. *To keep the whole process completely automatic, the* `CREATE_TABLE_IF_NOT_EXISTS` *query is run each time. If you integrate the functions into an existing database, there is no need for it.*
```python
conn = None
-try:
+try:
conn = pg8000.native.Connection(host=db_host, database=db_name, port=db_port, user=db_user, password=db_password, timeout=15)

conn.run(CREATE_TABLE_IF_NOT_EXISTS)
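# A sketch of the insert that follows (table, column, and `message`
# names are assumptions):
conn.run(
    "INSERT INTO articles (title, url) VALUES (:title, :url)",
    title=message["title"],
    url=message["url"],
)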
@@ -136,7 +136,7 @@ As explained in the [Scaleway Functions documentation](/serverless/functions/how

## Create a Terraform file to provision the necessary Scaleway resources

-For the purposes of this tutorial, we show how to provision all resources via Terraform.
+For the purposes of this tutorial, we show how to provision all resources via Terraform.

<Message type="tip">
If you do not want to use Terraform, you can also create the required resources via the [console](https://console.scaleway.com/), the [Scaleway API](https://www.scaleway.com/en/developers/api/), or any other [developer tool](https://www.scaleway.com/en/developers/). Remember that if you do so, you will need to set up environment variables for functions as previously specified. The following documentation may help create the required resources:
@@ -149,7 +149,7 @@ If you do not want to use Terraform, you can also create the required resources
1. Create a directory called `terraform` (at the same level as the `scraper` and `consumer` directories created in the previous steps).
2. Inside it, create a file called `main.tf`.
3. In the file you just created, add the code below to set up the [Scaleway Terraform provider](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs) and your Project:
-```
+```hcl
terraform {
required_providers {
scaleway = {
@@ -167,7 +167,7 @@ If you do not want to use Terraform, you can also create the required resources
}
```
4. Still in the same file, add the code below to provision the SQS resources: SQS activation for the project, separate credentials with appropriate permissions for producer and consumer, and an SQS queue:
-```
+```hcl
resource "scaleway_mnq_sqs" "main" {
project_id = scaleway_account_project.mnq_tutorial.id
}
@@ -202,7 +202,7 @@ If you do not want to use Terraform, you can also create the required resources
}
```
5. Add the code below to provision the Managed Database for PostgreSQL resources. Note that here we are creating a random password and using it for the default and worker user:
-```
+```hcl
resource "random_password" "dev_mnq_pg_exporter_password" {
length = 16
special = true
@@ -219,7 +219,7 @@ If you do not want to use Terraform, you can also create the required resources
node_type = "db-dev-s"
engine = "PostgreSQL-15"
is_ha_cluster = false
-disable_backup = true
+disable_backup = true
user_name = "mnq_initial_user"
password = random_password.dev_mnq_pg_exporter_password.result
}
@@ -240,7 +240,7 @@ If you do not want to use Terraform, you can also create the required resources
}

resource "scaleway_rdb_database" "main" {
-instance_id = scaleway_rdb_instance.main.id
+instance_id = scaleway_rdb_instance.main.id
name = "hn-database"
}

@@ -252,14 +252,14 @@ If you do not want to use Terraform, you can also create the required resources
}

resource "scaleway_rdb_privilege" "mnq_user_role" {
-instance_id = scaleway_rdb_instance.main.id
+instance_id = scaleway_rdb_instance.main.id
user_name = scaleway_rdb_user.worker.name
database_name = scaleway_rdb_database.main.name
permission = "all"
}
```
6. Add the code below to provision the functions resources. First, activate the namespace, then locally zip the code and create the functions in the cloud. Note that we are referencing variables from other resources, to completely automate the deployment process:
-```
+```hcl
locals {
scraper_folder_path = "../scraper"
consumer_folder_path = "../consumer"
@@ -354,17 +354,17 @@ If you do not want to use Terraform, you can also create the required resources
}
}
```
-Note that a folder `archives` needs to be created manually if you started from scratch. It is included in the git repository.
-7. Add the code below to provision the triggers resources. The cron trigger activates at the minutes `[0, 15, 30, 45]` of every hour. No arguments are passed, but we could do so by specifying them in JSON format in the `args` parameter.
-```
+Note that a folder `archives` needs to be created manually if you started from scratch. It is included in the git repository.
+7. Add the code below to provision the triggers resources. The cron trigger activates at the minutes `[0, 15, 30, 45]` of every hour. No arguments are passed, but we could do so by specifying them in JSON format in the `args` parameter.
+```hcl
resource "scaleway_function_cron" "scraper_cron" {
-function_id = scaleway_function.scraper.id
+function_id = scaleway_function.scraper.id
schedule = "0,15,30,45 * * * *"
args = jsonencode({})
}

resource "scaleway_function_trigger" "consumer_sqs_trigger" {
-function_id = scaleway_function.consumer.id
+function_id = scaleway_function.consumer.id
name = "hn-sqs-trigger"
sqs {
project_id = scaleway_mnq_sqs.main.project_id
@@ -378,22 +378,22 @@ Terraform makes this very straightforward. To provision all the resources and get everything running, execute:
```
cd terraform
terraform init
-terraform plan
+terraform plan
terraform apply
```

### How to check that everything is working correctly

Go to the [Scaleway console](https://console.scaleway.com/), and check the logs and metrics for Serverless Functions' execution and Messaging and Queuing SQS queue statistics.
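
The queue backlog can also be checked programmatically. A minimal `boto3` sketch (the endpoint URL and environment variable names are assumptions, mirroring the function snippets above):

```python
import os

import boto3

sqs = boto3.client(
    "sqs",
    endpoint_url="https://sqs.mnq.fr-par.scaleway.com",  # assumed Scaleway SQS endpoint
    aws_access_key_id=os.getenv("SQS_ACCESS_KEY"),
    aws_secret_access_key=os.getenv("SQS_SECRET_ACCESS_KEY"),
    region_name="fr-par",
)

# ApproximateNumberOfMessages should hover near zero when the consumer keeps up.
attrs = sqs.get_queue_attributes(
    QueueUrl=os.getenv("QUEUE_URL"),
    AttributeNames=["ApproximateNumberOfMessages"],
)
print(attrs["Attributes"]["ApproximateNumberOfMessages"])
```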

-To make sure the data is correctly stored in the database, you can [connect to it directly](/managed-databases/postgresql-and-mysql/how-to/connect-database-instance/) via a CLI tool such as `psql`.
+To make sure the data is correctly stored in the database, you can [connect to it directly](/managed-databases/postgresql-and-mysql/how-to/connect-database-instance/) via a CLI tool such as `psql`.
Retrieve the instance IP and port of your Managed Database from the console, under the [Managed Database section](https://console.scaleway.com/rdb/instances).
Use the following command to connect to your database. When prompted for a password, you can find it by running `terraform output -json`.
```
psql -h <DB_INSTANCE_IP> --port <DB_INSTANCE_PORT> -d hn-database -U worker
```

-When you are done testing, don't forget to clean up! To do so, run:
+When you are done testing, don't forget to clean up! To do so, run:
```
cd terraform
terraform destroy
@@ -405,7 +405,7 @@ We have shown how to asynchronously decouple the producer and the consumer using
While the volume of data processed in this example is quite small, thanks to the Messaging and Queuing SQS queue's robustness and the auto-scaling capabilities of the Serverless Functions, you can adapt this example to manage larger workloads.

Here are some possible extensions to this basic example:
-- Replace the simple proposed logic with your own. What about counting how many times some keywords (e.g. copilot, serverless, microservice) appear in Hacker News articles? (See the sketch after this list.)
+- Replace the simple proposed logic with your own. What about counting how many times some keywords (e.g. copilot, serverless, microservice) appear in Hacker News articles? (See the sketch after this list.)
- Define multiple cron triggers for different websites and pass the website as an argument to the function. Or, create multiple functions that feed the same queue.
- Use a [Serverless Container](/serverless/containers/quickstart/) instead of the consumer function, and use a command line tool such as `htmldoc` or `pandoc` to convert the scraped articles to PDF and upload the result to a [Scaleway Object Storage](/storage/object/quickstart/) S3 bucket.
- Replace the Managed Database for PostgreSQL with a [Scaleway Serverless Database](/serverless/sql-databases/quickstart/), so that all the infrastructure lives in the serverless ecosystem! *Note that at the moment there is no Terraform support for Serverless Database, hence the choice here to use Managed Database for PostgreSQL*.
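
As an illustration of the keyword-counting idea above, a minimal sketch (the sample titles are placeholders):

```python
from collections import Counter

KEYWORDS = ("copilot", "serverless", "microservice")

def count_keywords(titles):
    """Count keyword occurrences across scraped article titles."""
    counts = Counter()
    for title in titles:
        lowered = title.lower()
        for keyword in KEYWORDS:
            counts[keyword] += lowered.count(keyword)
    return counts

print(count_keywords(["Serverless scraping with SQS", "My microservice copilot"]))
```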
66 changes: 33 additions & 33 deletions tutorials/deploy-laravel-on-serverless-containers/index.mdx
@@ -7,7 +7,7 @@ content:
paragraph: This tutorial provides a step-by-step guide for deploying a containerized Laravel application on the Scaleway cloud platform.
tags: laravel php docker nginx fpm
hero: assets/scaleway-umami.webp
-categories:
+categories:
- containers
- container-registry
dates:
@@ -42,7 +42,7 @@ Laravel applications make use of [queues](https://laravel.com/docs/10.x/queues)
2. Create a queue. In this example, we create a `Standard` queue (At-least-once delivery, the order of messages is not preserved) with the default parameters. This queue will be the default queue used by our application.

<Lightbox src="scaleway-serverless-messaging-queue.webp" alt="Scaleway Console Messaging and Queuing Queue" />

3. Generate credentials. In this example, we generate the credentials with `read` and `write` access.

<Lightbox src="scaleway-serverless-messaging-credential.webp" alt="Scaleway Console Messaging and Queuing Credential" />
@@ -53,7 +53,7 @@ In this section, we will focus on building the containerized image. With Docker,

1. Create the Dockerfile: we create a `Dockerfile` which is a text file that contains instructions for Docker to build the image. In this example, we specify the base image as `php:fpm-alpine`, install and enable the necessary php dependencies with [`install-php-extensions`](https://github.com/mlocati/docker-php-extension-installer), and determine the commands to be executed at startup.

-```
+```dockerfile
# Dockerfile
FROM --platform=linux/amd64 php:8.2.6-fpm-alpine3.18

@@ -84,9 +84,9 @@ In this section, we will focus on building the containerized image. With Docker,
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```
2. Create the supervisor configuration file. [Supervisor](http://supervisord.org/) is a reliable and efficient process control system for managing and monitoring processes. It is used here because multiple processes run within the container. In this example, we create a `stubs/supervisor/supervisord.conf` file with the following configuration to start the web server Nginx, the php-fpm pool, and 5 workers:
-```
+```conf
# stubs/supervisor/supervisord.conf
-[supervisord]
+[supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
@@ -128,43 +128,43 @@ In this section, we will focus on building the containerized image. With Docker,

3. Create web server configuration files. Nginx will be used to serve the static assets and to forward the requests to the php-fpm pool for processing. In this example, we create the following configuration files `stubs/nginx/http.d/default.conf` and `stubs/nginx/nginx.conf`.

-```
+```nginx
# stubs/nginx/http.d/default.conf
server {
listen 80;
listen [::]:80;
server_name _;
root /var/www/html/public;

add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";

index index.php;

charset utf-8;

location / {
try_files $uri $uri/ /index.php?$query_string;
}

location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }

error_page 404 /index.php;

location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}

location ~ /\.(?!well-known).* {
deny all;
}
}
```

-```
+```nginx
# stubs/nginx/nginx.conf
error_log /var/log/nginx/error.log notice;
events {
@@ -183,11 +183,11 @@ In this section, we will focus on building the containerized image. With Docker,
pid /var/run/nginx.pid;
user nginx;
worker_processes auto;
-```
+```

4. Create the php-fpm configuration file `stubs/php/php-fpm.d/zz-docker.conf`, and configure the php-fpm pool to render the dynamic pages of the Laravel application. Depending on the needs of your application, you might have to fine-tune the configuration of the process manager; a rough sizing rule is sketched after this list. Further information is available in the [php manual](https://www.php.net/manual/en/install.fpm.configuration.php).
-```
+
+```conf
[global]
daemonize = no

@@ -197,27 +197,27 @@ In this section, we will focus on building the containerized image. With Docker,
listen.group = www-data
listen.mode = 0660

-pm = dynamic
-pm.max_children = 75
-pm.start_servers = 10
-pm.min_spare_servers = 5
-pm.max_spare_servers = 20
+pm = dynamic
+pm.max_children = 75
+pm.start_servers = 10
+pm.min_spare_servers = 5
+pm.max_spare_servers = 20
pm.process_idle_timeout = 10s
```

5. Build the docker image.
-```
+```sh
docker build -t my-image .
```
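
For the process-manager settings in step 4, a rough way to size `pm.max_children` (illustrative figures, not measurements from this deployment):

```latex
\texttt{pm.max\_children} \approx \left\lfloor \frac{M_{\text{available to PHP}}}{M_{\text{per worker}}} \right\rfloor,
\qquad \text{e.g. } \left\lfloor \frac{1024\,\text{MB}}{40\,\text{MB}} \right\rfloor = 25
```

Tune against real memory profiles of your workers before raising the limit.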

-## Creating Container Registry
+## Creating Container Registry

1. [Create a Scaleway Container Registry namespace](/containers/container-registry/how-to/create-namespace/) in the `PAR` region. Set the visibility to `Private` to avoid having your container retrieved without proper authentication and authorization.

<Lightbox src="scaleway-serverless-containers-namespace.webp" alt="Scaleway Console Container Registry Namespace" />

2. Run the following command in your local terminal to log in to the newly created Container Registry.
-```
+```sh
docker login rg.fr-par.scw.cloud/namespace-zen-feistel -u nologin --password-stdin <<< "$SCW_SECRET_KEY"
```

@@ -226,8 +226,8 @@ In this section, we will focus on building the containerized image. With Docker,
</Message>

3. Tag the image and push it to the Container Registry namespace.
-```
+
+```sh
docker tag my-image rg.fr-par.scw.cloud/namespace-zen-feistel/my-image:v1
docker push rg.fr-par.scw.cloud/namespace-zen-feistel/my-image:v1
```
@@ -237,7 +237,7 @@ In this section, we will focus on building the containerized image. With Docker,
The Scaleway documentation website provides a Quickstart on how to [create and manage a Serverless Container Namespace](/serverless/containers/quickstart/).

1. Create a Serverless Containers namespace. In this example, we create the `my-laravel-application` namespace and configure the environment variables and secrets necessary for our application. In particular, we must add all the variables needed to connect to the previously created SQS/SNS queue.

By default, Laravel expects the following environment variables/secrets to be filled in for queues: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`, `QUEUE_CONNECTION`, `SQS_PREFIX` and `SQS_QUEUE`.

2. Deploy the application. Click **+ Deploy a Container** once the namespace is created, and follow the instructions of the creation wizard. Select the registry namespace and the previously uploaded Docker image and configure the listening port (the Nginx web server is listening on port 80). For the CPU and memory, define at least 560 mvCPU and 256 MB respectively. To reduce the limitations due to [cold start](/serverless/containers/concepts/#cold-start), we will run at least 1 instance.
@@ -274,7 +274,7 @@ By default, some metrics will be available in the Scaleway console. However, to

To test the load on the application, there is a basic test route that pushes a job into the queue and returns the welcome page.

-``` php
+```php
# routes/web.php
use App\Jobs\ProcessPodcast;

@@ -287,7 +287,7 @@ Route::get('/test', function () {
```
The job does nothing but wait for a couple of seconds.

-``` php
+```php
# app/Jobs/ProcessPodcast

class ProcessPodcast implements ShouldQueue
@@ -300,11 +300,11 @@ class ProcessPodcast implements ShouldQueue
```
Then, use `hey` to send 400 requests (20 concurrent requests) to this route.

-```
+```sh
hey -n 400 -q 20 https://example.com/test
```

-We can see that our deployment is not sufficiently sized to handle such a workload, and the response times are far from ideal.
+We can see that our deployment is not sufficiently sized to handle such a workload, and the response times are far from ideal.

```
Response time histogram: