pages/edge/configuration.mdx (+1 -1)

@@ -349,7 +349,7 @@ Available values:
 * `single_concurrency`
 
 Special scaling mode for apps that only support **a single request at a time**.
-You should enable this mode if your app can not handle multiple requests concurrently.
+You should enable this mode if your app cannot handle multiple requests concurrently.
 
 Only a single request will be sent to each instance at a time.
 Edge will dynamically scale up additional instances of your app as required by the incoming request volume, and shut them down when they are no longer needed.
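For reference, opting into this mode in an app's `app.yaml` might look like the fragment below. This is a minimal sketch: it assumes the value is set under a `scaling.mode` key, so check configuration.mdx itself for the authoritative option name.

```yaml
# Hypothetical app.yaml fragment; the `scaling.mode` key is an assumption,
# see configuration.mdx for the exact option name.
scaling:
  mode: single_concurrency   # each instance handles one request at a time
```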
pages/edge/faq.mdx (+5 -5)

@@ -11,18 +11,18 @@ There will soon be a way to restrict your apps to only run in specified regions,
 ## What is threshold for 'spill over' when nodes are overloaded? Are allocations to edge nodes based on vCPU or request volume ?
 
 The system will load balance dynamically based on server load - this is not something that you should assume any functional transparency about. Edge will dynamically determine how many instances of your app to start up in a given region/server.
-Note that we will add configuration for this, eg: you will be able to specify that you want at most X instances in a given region, or that each instance of your app should handle at most X concurrent requests, with new instances launched dynamically if that threshold is exceeded.
+Note that we will add configuration for this, e.g., you will be able to specify that you want at most X instances in a given region, or that each instance of your app should handle at most X concurrent requests, with new instances launched dynamically if that threshold is exceeded.
 
 ## How long is the 'short idle period' before server termination. If post-request server-side processing is required, how can we prevent a shutdown? Is this determined by CPU usage, a timeout post-last request, or perhaps an interceptor in the WASIX VM monitoring for specific IO syscalls, such as `accept4` ?
 
-Currently this is a fixed value of a few minutes, but this can not be assumed, it may change at any time, and workloads can get evicted under contention.
-What we will add a graceful termination flow like on Linux: instances will receive a SIGTERM, and will have some time to clean up and finish remaining work, with hard termination only happening after a well-known grace period.
+Currently this is a fixed value of a few minutes, but this cannot be assumed, it may change at any time, and workloads can get evicted under contention.
+What we will add a graceful termination flow like on Linux: instances will receive a SIGTERM, and will have some time to clean up and finish remaining work, with hard termination only happening after a well-known grace period.
 
 ## Do we need utilities like pgBouncer ?
 
 It would probably be advisable.
-It's possible for multiple versions of your app to run on multiple servers in the same region, and for a bunch of new instances to get spawned, so you might have quite bursty connection counts.
-If you need more control, one of the features that we hope to finish in this quarter (before end of year) is persistent instances, which will keep running, and won't be dynamically spawned or terminated.
+It's possible for multiple versions of your app to run on multiple servers in the same region, and for a bunch of new instances to get spawned, so you might have quite bursty connection counts.
+If you need more control, one of the features that we hope to finish in this quarter (before end of year) is persistent instances, which will keep running, and won't be dynamically spawned or terminated.
 
 A note about database locality:
 We will have configuration that allows you to restrict your app to certain regions, so if you want to keep your instances close to the location of your database, that will be possible.
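Once the graceful-termination flow described in the FAQ above lands, an app entrypoint could drain work on SIGTERM roughly along these lines. This is a generic POSIX shell sketch of the pattern, not an Edge-specific API, and `my_server` is a placeholder for the real server command.

```bash
#!/bin/sh
# Sketch of a graceful-shutdown wrapper for the planned SIGTERM flow.
# `my_server` is a placeholder for the actual server binary.
cleanup() {
  echo "SIGTERM received: forwarding to the server and waiting for it to drain"
  kill -TERM "$SERVER_PID" 2>/dev/null
  wait "$SERVER_PID"
  exit 0
}

trap cleanup TERM

my_server &          # start the real workload in the background
SERVER_PID=$!
wait "$SERVER_PID"   # returns early if SIGTERM arrives, triggering cleanup
```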
pages/edge/guides/volumes.mdx (+10 -10)

@@ -6,28 +6,28 @@ import ImageLoader from "@components/ImageLoader";
 import dashboard from "@assets/app-dashboard.jpeg";
 
 
-# Using peristent storage in Wasmer Edge Apps
+# Using persistent storage in Wasmer Edge Apps
 
-In this tutorial, you will learn how to setup peristent storage for your edge apps
+In this tutorial, you will learn how to setup persistent storage for your edge apps
 
-Lets start with creating an edge application
+Let's start with creating an edge application
 
 You can quickstart with an application from one of our templates
 ```bash copy
 wasmer app create --template static-website
 ```
 
-Now, lets edit `app.yaml` to setup persistent volume
+Now, let's edit `app.yaml` to setup persistent volume
 
 Add
 ```yaml copy
 volumes:
   - name: data
     mount: /public/things_i_want_to_persist
 ```
-to `app.yaml`. This will mount a peristent volume named `data` into `/public/persistent_volume` directory of your application
+to `app.yaml`. This will mount a persistent volume named `data` into `/public/persistent_volume` directory of your application
 
-Now, lets re-deploy the app
+Now, let's re-deploy the app
 
 ```bash
 wasmer deploy
@@ -49,9 +49,9 @@ You can use any s3 client to retrieve and upload data to your bucket
 />
 
 ## via s3 clients
-You can provide the credentials spesific to your app to any s3 client for accessing your volume. The credentials are available in your app's dashboard
+You can provide the credentials specific to your app to any s3 client for accessing your volume. The credentials are available in your app's dashboard
 
-Also, wasmer cli has a convinent way to configure rclone. `wasmer app volume configure` will print rclone configuration for connecting your apps volume
+Also, wasmer cli has a convenient way to configure rclone. `wasmer app volume configure` will print rclone configuration for connecting your apps volume
 
 ## via wasmer.io
 In the storage section of your app's dashboard, you can see links to browse your volumes.
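As a side note on the rclone flow in the hunk above, end-to-end usage could look like the sketch below. The remote name `myapp-volume` is a placeholder for whatever remote the printed configuration defines.

```bash
# Print the rclone config section for the app's volume, then paste it into
# ~/.config/rclone/rclone.conf (the remote name below is a placeholder).
wasmer app volume configure

rclone ls myapp-volume:                  # list files currently in the volume
rclone copy ./index.html myapp-volume:   # upload a local file into the volume
```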
@@ -61,7 +61,7 @@ Note: You can only view, you can't upload files with the s3 frontend yet.
 
 # Proving persistency
 
-Now, lets upload "hello world" file to the volume. Get your volumes credentials and upload using an s3 client
+Now, let's upload "hello world" file to the volume. Get your volumes credentials and upload using an s3 client
 The template app we used will show the file you uploaded at https://your_app_url.wasmer.app/things_i_want_to_persist/index.html

@@ ... @@
 You can also use the s3 client or the s3 frontend that we host for you to view your uploaded file
 
-Now, lets force a container restart.
+Now, let's force a container restart.
 When updating your app with `wasmer deploy`, a new instance of your app will be created. So a new filesystem will be in use for your app and the volumes will be mounted
 After running `wasmer deploy`, you can see your `index.html` is still there. View from the app at url https://your_app_url.wasmer.app/things_i_want_to_persist/index.html, or from your s3 client, or from our hosted s3 frontend!
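To make the "upload with an s3 client" step above concrete, here is one possible sketch using the AWS CLI as a generic S3 client. The credentials, bucket name, and endpoint URL are placeholders to be taken from the app's dashboard, and it assumes files placed at the bucket root show up under the volume's mount path in the app.

```bash
# Placeholders throughout: substitute the credentials, bucket, and endpoint
# shown in your app's dashboard. Assumes the bucket root maps to the volume mount.
export AWS_ACCESS_KEY_ID="<access-key-from-dashboard>"
export AWS_SECRET_ACCESS_KEY="<secret-key-from-dashboard>"

echo "hello world" > index.html
aws s3 cp index.html "s3://<bucket-from-dashboard>/index.html" \
  --endpoint-url "<s3-endpoint-from-dashboard>"
```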
pages/edge/vs/amazon-lambda.mdx (+1 -1)

@@ -18,5 +18,5 @@ Wasmer Edge and Amazon Lambda differ in significant ways:
 - Wasmer Edge is built on **simpler and more scalable** technology than Lambda (which is built on top of Firecracker)
 - Wasmer Edge **doesn't require code changes in your application** to run, while you need to adapt your code to use the custom Lambda SDK to run.
 - Wasmer Edge **lets you reuse the same architecture to deploy your static assets and your dynamic websites**, while Amazon Lambda focuses only on the dynamic requests and recommends using S3 for your static assets.
-- Wasmer Edge can run UDP and other protocols while Amazon Lambda can not.
+- Wasmer Edge can run UDP and other protocols while Amazon Lambda cannot.
 - Amazon Lambda currently has more trigger options than Wasmer Edge