tutorials/create-serverless-scraping/index.mdx (+23 −23)
@@ -47,7 +47,7 @@ We start by creating the scraper program, or the "data producer".
SQS credentials and queue URL are read by the function from environment variables. Those variables are set by Terraform as explained in [one of the next sections](#create-a-terraform-file-to-provision-the-necessary-scaleway-resources). *If you choose another deployment method, such as the [console](https://console.scaleway.com/), do not forget to set them.*
@@ -65,10 +65,10 @@ We start by creating the scraper program, or the "data producer".
Using the AWS python sdk `boto3`, connect to the SQS queue and push the `title` and `url` of articles published less than 15 minutes ago.
```python
sqs = boto3.client(
-    'sqs',
-    endpoint_url=SCW_SQS_URL,
-    aws_access_key_id=sqs_access_key,
-    aws_secret_access_key=sqs_secret_access_key,
+    'sqs',
+    endpoint_url=SCW_SQS_URL,
+    aws_access_key_id=sqs_access_key,
+    aws_secret_access_key=sqs_secret_access_key,
    region_name='fr-par')

for age, titleline in zip(ages, titlelines):
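For reference, the publishing step this hunk touches can be sketched as follows. The environment variable names other than `SCW_SQS_URL`, and the `push_article` helper, are illustrative assumptions rather than the tutorial's exact code:

```python
import json
import os

import boto3

# Assumed environment variable names (set by Terraform in the tutorial); adapt to your own.
SCW_SQS_URL = os.environ["SCW_SQS_URL"]                    # SQS endpoint
QUEUE_URL = os.environ["QUEUE_URL"]                        # URL of the queue created for this tutorial
sqs_access_key = os.environ["SQS_ACCESS_KEY"]
sqs_secret_access_key = os.environ["SQS_SECRET_ACCESS_KEY"]

sqs = boto3.client(
    'sqs',
    endpoint_url=SCW_SQS_URL,
    aws_access_key_id=sqs_access_key,
    aws_secret_access_key=sqs_secret_access_key,
    region_name='fr-par')

def push_article(title: str, url: str) -> None:
    # Each recently published article is pushed as a small JSON message.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"title": title, "url": url}))
```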
@@ -117,7 +117,7 @@ Next, let's create our consumer function. When receiving a message containing th
Lastly, we write the information into the database. *To keep the whole process completely automatic the* `CREATE_TABLE_IF_NOT_EXISTS` *query is run each time. If you integrate the functions into an existing database, there is no need for it.*
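A rough sketch of that consumer step is shown below. The table layout, the environment variable names, the choice of `psycopg2` as PostgreSQL driver, and the assumption that the message arrives in `event["body"]` are all illustrative, not the tutorial's exact code:

```python
import json
import os

import psycopg2  # any PostgreSQL driver works; the tutorial may use a different one

CREATE_TABLE_IF_NOT_EXISTS = """
CREATE TABLE IF NOT EXISTS articles (
    title TEXT,
    url   TEXT PRIMARY KEY
);
"""

def handle(event, context):
    body = json.loads(event["body"])
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        port=os.environ["DB_PORT"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    with conn, conn.cursor() as cur:
        # Run on every invocation so the whole pipeline also works on an empty database.
        cur.execute(CREATE_TABLE_IF_NOT_EXISTS)
        cur.execute(
            "INSERT INTO articles (title, url) VALUES (%s, %s) ON CONFLICT DO NOTHING",
            (body["title"], body["url"]),
        )
    conn.close()
    return {"statusCode": 200}
```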
@@ -136,7 +136,7 @@ As explained in the [Scaleway Functions documentation](/serverless/functions/how
## Create a Terraform file to provision the necessary Scaleway resources

-For the purposes of this tutorial, we show how to provision all resources via Terraform.
+For the purposes of this tutorial, we show how to provision all resources via Terraform.

<Message type="tip">
If you do not want to use Terraform, you can also create the required resources via the [console](https://console.scaleway.com/), the [Scaleway API](https://www.scaleway.com/en/developers/api/), or any other [developer tool](https://www.scaleway.com/en/developers/). Remember that if you do so, you will need to set up environment variables for functions as previously specified. The following documentation may help create the required resources:
@@ -149,7 +149,7 @@ If you do not want to use Terraform, you can also create the required resources
1. Create a directory called `terraform` (at the same level as the `scraper` and `consumer` directories created in the previous steps).
2. Inside it, create a file called `main.tf`.
3. In the file you just created, add the code below to set up the [Scaleway Terraform provider](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs) and your Project:
-    ```
+    ```hcl
    terraform {
      required_providers {
        scaleway = {
@@ -167,7 +167,7 @@ If you do not want to use Terraform, you can also create the required resources
    }
    ```
4. Still in the same file, add the code below to provision the SQS resources: SQS activation for the project, separate credentials with appropriate permissions for producer and consumer, and an SQS queue:
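For reference, the provider block from step 3 (shown only partially in the hunks above) typically reads in full as follows; the `project_id` variable is an illustrative assumption:

```hcl
terraform {
  required_providers {
    scaleway = {
      source = "scaleway/scaleway"
    }
  }
  required_version = ">= 0.13"
}

provider "scaleway" {
  region     = "fr-par"
  zone       = "fr-par-1"
  project_id = var.project_id # or rely on SCW_DEFAULT_PROJECT_ID in the environment
}
```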
@@ -202,7 +202,7 @@ If you do not want to use Terraform, you can also create the required resources
    }
    ```
5. Add the code below to provision the Managed Database for PostgreSQL resources. Note that here we are creating a random password and using it for the default and worker user:
6. Add the code below to provision the functions resources. First, activate the namespace, then locally zip the code and create the functions in the cloud. Note that we are referencing variables from other resources, to completely automate the deployment process:
-    ```
+    ```hcl
    locals {
      scraper_folder_path = "../scraper"
      consumer_folder_path = "../consumer"
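Step 5's code is not shown in this view; a typical shape for it, with illustrative resource names, sizes, and engine version, is:

```hcl
# Random password shared by the default and worker database users.
resource "random_password" "db_password" {
  length  = 16
  special = true
}

resource "scaleway_rdb_instance" "main" {
  name           = "scraping-db"
  node_type      = "db-dev-s"
  engine         = "PostgreSQL-15"
  is_ha_cluster  = false
  disable_backup = true
  user_name      = "admin"
  password       = random_password.db_password.result
}
```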
@@ -354,17 +354,17 @@ If you do not want to use Terraform, you can also create the required resources
      }
    }
    ```
-    Note that a folder `archives` needs to be created manually if you started from scratch. It is included in the git repository.
-7. Add the code below to provision the triggers resources. The cron trigger activates at the minutes `[0, 15, 30, 45]` of every hour. No arguments are passed, but we could do so by specifying them in JSON format in the `args` parameter.
-    ```
+    Note that a folder `archives` needs to be created manually if you started from scratch. It is included in the git repository.
+7. Add the code below to provision the triggers resources. The cron trigger activates at the minutes `[0, 15, 30, 45]` of every hour. No arguments are passed, but we could do so by specifying them in JSON format in the `args` parameter.
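For reference, a cron trigger of that shape can be declared roughly as follows; the resource and function names are placeholders:

```hcl
resource "scaleway_function_cron" "scraper_cron" {
  function_id = scaleway_function.scraper.id
  schedule    = "0,15,30,45 * * * *" # minutes 0, 15, 30 and 45 of every hour
  args        = jsonencode({})       # optional: JSON-encoded arguments passed to the function
}
```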
@@ -378,22 +378,22 @@ Terraform makes this very straightforward. To provision all the resources and ge
    ```
    cd terraform
    terraform init
-    terraform plan
+    terraform plan
    terraform apply
    ```

### How to check that everything is working correctly

Go to the [Scaleway console](https://console.scaleway.com/), and check the logs and metrics for Serverless Functions' execution and Messaging and Queuing SQS queue statistics.

-To make sure the data is correctly stored in the database, you can [connect to it directly](/managed-databases/postgresql-and-mysql/how-to/connect-database-instance/) via a CLI tool such as `psql`.
+To make sure the data is correctly stored in the database, you can [connect to it directly](/managed-databases/postgresql-and-mysql/how-to/connect-database-instance/) via a CLI tool such as `psql`.
Retrieve the instance IP and port of your Managed Database from the console, under the [Managed Database section](https://console.scaleway.com/rdb/instances).
Use the following command to connect to your database. When prompted for a password, you can find it by running `terraform output -json`.
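(The connection command itself falls in lines elided from this diff view; a typical `psql` invocation, with placeholder values, is:)

```sh
psql -h <instance-ip> -p <port> -U <user> -d <database-name>
```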
-When you are done testing, don't forget to clean up! To do so, run:
+When you are done testing, don't forget to clean up! To do so, run:
    ```
    cd terraform
    terraform destroy
@@ -405,7 +405,7 @@ We have shown how to asynchronously decouple the producer and the consumer using
While the volume of data processed in this example is quite small, thanks to the Messaging and Queuing SQS queue's robustness and the auto-scaling capabilities of the Serverless Functions, you can adapt this example to manage larger workloads.

Here are some possible extensions to this basic example:
-- Replace the simple proposed logic with your own. What about counting how many times some keywords (e.g: copilot, serverless, microservice) appear in Hacker News articles?
+- Replace the simple proposed logic with your own. What about counting how many times some keywords (e.g: copilot, serverless, microservice) appear in Hacker News articles?
- Define multiple cron triggers for different websites and pass the website as an argument to the function. Or, create multiple functions that feed the same queue.
- Use a [Serverless Container](/serverless/containers/quickstart/) instead of the consumer function, and use a command line tool such as `htmldoc` or `pandoc` to convert the scraped articles to PDF and upload the result to a [Scaleway Object Storage](/storage/object/quickstart/) S3 bucket.
- Replace the Managed Database for PostgreSQL with a [Scaleway Serverless Database](/serverless/sql-databases/quickstart/), so that all the infrastructure lives in the serverless ecosystem! *Note that at the moment there is no Terraform support for Serverless Database, hence the choice here to use Managed Database for PostgreSQL*.
tutorials/deploy-laravel-on-serverless-containers/index.mdx (+33 −33)
@@ -7,7 +7,7 @@ content:
  paragraph: This tutorial provides a step-by-step guide for deploying a containerized Laravel application on the Scaleway cloud platform.
tags: laravel php docker nginx fpm
hero: assets/scaleway-umami.webp
-categories:
+categories:
  - containers
  - container-registry
dates:
@@ -42,7 +42,7 @@ Laravel applications make use of [queues](https://laravel.com/docs/10.x/queues)
2. Create a queue. In this example, we create a `Standard` queue (At-least-once delivery, the order of messages is not preserved) with the default parameters. This queue will be the default queue used by our application.

<Lightbox src="scaleway-serverless-messaging-queue.webp" alt="Scaleway Console Messaging and Queuing Queue" />
-
+
3. Generate credentials. In this example, we generate the credentials with `read` and `write` access.

<Lightbox src="scaleway-serverless-messaging-credential.webp" alt="Scaleway Console Messaging and Queuing Credential" />
@@ -53,7 +53,7 @@ In this section, we will focus on building the containerized image. With Docker,
1. Create the Dockerfile: we create a `Dockerfile` which is a text file that contains instructions for Docker to build the image. In this example, we specify the base image as `php:fpm-alpine`, install and enable the necessary php dependencies with [`install-php-extensions`](https://github.com/mlocati/docker-php-extension-installer), and determine the commands to be executed at startup.

-    ```
+    ```dockerfile
    # Dockerfile
    FROM --platform=linux/amd64 php:8.2.6-fpm-alpine3.18
@@ -84,9 +84,9 @@ In this section, we will focus on building the containerized image. With Docker,
2. Create the supervisor configuration file. [Supervisor](http://supervisord.org/) is a reliable and efficient process control system for managing and monitoring processes. This is used as multiple processes are running within the container. In this example, we create a `stubs/supervisor/supervisord.conf` file with the following configuration to start the web server Nginx, the php-fpm pool, and 5 workers:
-    ```
+    ```conf
    # stubs/supervisor/supervisord.conf
-    [supervisord]
+    [supervisord]
    nodaemon=true
    logfile=/dev/null
    logfile_maxbytes=0
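The 5 workers mentioned in step 2 are started further down the same file (not shown in this hunk); a supervisord program block for them typically looks like this sketch, where the application path and option values are illustrative assumptions:

```conf
; Illustrative worker block; /var/www/html is an assumed application path.
[program:laravel-worker]
command=php /var/www/html/artisan queue:work sqs --sleep=3 --tries=3
process_name=%(program_name)s_%(process_num)02d
numprocs=5
autostart=true
autorestart=true
stdout_logfile=/dev/null
stdout_logfile_maxbytes=0
```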
@@ -128,43 +128,43 @@ In this section, we will focus on building the containerized image. With Docker,
3. Create web server configuration files. Nginx will be used to serve the static assets and to forward the requests to the php-fpm pool for processing. In this example, we create the following configuration files `stubs/nginx/http.d/default.conf` and `stubs/nginx/nginx.conf`.
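The contents of those two files are largely elided from this diff. As a rough sketch only (not the tutorial's exact configuration), a minimal `default.conf` that serves static assets and forwards PHP requests to the php-fpm pool could look like:

```conf
# Illustrative stubs/nginx/http.d/default.conf; paths and the fastcgi target are assumptions.
server {
    listen 80;
    root /var/www/html/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000; # or the unix socket configured in zz-docker.conf
    }
}
```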
@@ -183,11 +183,11 @@ In this section, we will focus on building the containerized image. With Docker,
    pid /var/run/nginx.pid;
    user nginx;
    worker_processes auto;
-    ```
+    ```

4. Create the php-fpm configuration file. The configuration `stubs/php/php-fpm.d/zz-docker.conf` file should be created, and the php-fpm pool configured to render the dynamic pages of the Laravel application. Depending on the needs of your application, you might have to fine-tune the configuration of the process manager. Further information is available in the [php manual](https://www.php.net/manual/en/install.fpm.configuration.php).
-
-    ```
+
+    ```conf
    [global]
    daemonize = no
@@ -197,27 +197,27 @@ In this section, we will focus on building the containerized image. With Docker,
    listen.group = www-data
    listen.mode = 0660

-    pm = dynamic
-    pm.max_children = 75
-    pm.start_servers = 10
-    pm.min_spare_servers = 5
-    pm.max_spare_servers = 20
+    pm = dynamic
+    pm.max_children = 75
+    pm.start_servers = 10
+    pm.min_spare_servers = 5
+    pm.max_spare_servers = 20
    pm.process_idle_timeout = 10s
    ```

5. Build the docker image.
-    ```
+    ```sh
    docker build -t my-image .
    ```

-## Creating Container Registry
+## Creating Container Registry

1. [Create a Scaleway Container Registry namespace](/containers/container-registry/how-to/create-namespace/) in the `PAR` region. Set the visibility to `Private` to avoid having your container retrieved without proper authentication and authorization.
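Once the namespace exists, the image built in step 5 is typically tagged and pushed with commands along these lines; the namespace name, tag, and `SCW_SECRET_KEY` variable are placeholders:

```sh
# Log in to the Scaleway registry with your secret key, then tag and push the image.
echo "$SCW_SECRET_KEY" | docker login rg.fr-par.scw.cloud/my-laravel-namespace -u nologin --password-stdin
docker tag my-image rg.fr-par.scw.cloud/my-laravel-namespace/my-image:v1
docker push rg.fr-par.scw.cloud/my-laravel-namespace/my-image:v1
```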
@@ -237,7 +237,7 @@ In this section, we will focus on building the containerized image. With Docker,
The Scaleway documentation website provides a Quickstart on how to [create and manage a Serverless Container Namespace](/serverless/containers/quickstart/).

1. Create a Serverless Containers namespace. In this example, we create the `my-laravel-application` namespace and configure the environment variables and secrets necessary for our application. In particular, we must add all the variables needed to connect to the previously created SQS/SNS queue.
-
+
By default, Laravel expects the following environment variables/secrets to be filled in for queues: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`, `QUEUE_CONNECTION`, `SQS_PREFIX` and `SQS_QUEUE`.

2. Deploy the application. Click **+ Deploy a Container** once the namespace is created, and follow the instructions of the creation wizard. Select the registry namespace and the previously uploaded Docker image and configure the listening port (the Nginx web server is listening on port 80). For the CPU and memory, define at least 560 mvCPU and 256 MB respectively. To reduce the limitations due to [cold start](/serverless/containers/concepts/#cold-start), we will run at least 1 instance.
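As an illustration of the queue-related variables listed in step 1 above, a corresponding set of values could look like the following; every value is a placeholder, and the exact `SQS_PREFIX` format depends on your queue's URL:

```env
# Illustrative values only; use your own credentials, region, and queue name.
QUEUE_CONNECTION=sqs
AWS_ACCESS_KEY_ID=<your-messaging-access-key>
AWS_SECRET_ACCESS_KEY=<your-messaging-secret-key>
AWS_DEFAULT_REGION=fr-par
SQS_PREFIX=<your-queue-url-without-the-queue-name>
SQS_QUEUE=<your-queue-name>
```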
@@ -274,7 +274,7 @@ By default, some metrics will be available in the Scaleway console. However, to
To test the load on the application, there is a basic test route that pushes a job into the queue and returns the welcome page.

-```php
+```php
# routes/web.php
use App\Jobs\ProcessPodcast;
@@ -287,7 +287,7 @@ Route::get('/test', function () {
```
The job does nothing but wait for a couple of seconds.

-```php
+```php
# app/Jobs/ProcessPodcast

class ProcessPodcast implements ShouldQueue
@@ -300,11 +300,11 @@ class ProcessPodcast implements ShouldQueue
```
Then, use `hey` to send 400 requests (20 concurrent requests) to this route.

-```
+```sh
hey -n 400 -q 20 https://example.com/test
```

-We can see that our deployment is not sufficiently sized to handle such a workload, and the response times are far from ideal.
+We can see that our deployment is not sufficiently sized to handle such a workload, and the response times are far from ideal.