docs/architecture/autoscaling.md (+1 -1)
@@ -148,7 +148,7 @@ mean per pod = 90 / 1 = 90
[Scaling to zero](/openfaas-pro/scale-to-zero) is an opt-in feature on a per-function basis. It can be used in combination with any scaling mode, including *Static scaling*.
-## Testing out the various modes
+## Examples of scaling modes
A quick primer on [hey](https://github.com/rakyll/hey), a load testing tool written in Go.
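Scale to zero, mentioned in the hunk above, is normally opted into per function with labels in `stack.yml`. A minimal sketch, assuming the label names from the OpenFaaS Pro scale-to-zero docs (the function name, image and idle duration are placeholders):

```yaml
functions:
  sleeper:
    image: ghcr.io/example/sleeper:latest       # placeholder image
    labels:
      com.openfaas.scale.zero: "true"           # opt this function into scale to zero
      com.openfaas.scale.zero-duration: "15m"   # idle time before scaling down to zero replicas
```

Redeploying with `faas-cli deploy` applies the labels; the function is scaled back up from zero when the next invocation arrives.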
docs/openfaas-pro/comparison.md (+2 -5)
@@ -54,7 +54,7 @@ Did you know? OpenFaaS Pro's autoscaling engine can scale many different types o
| Maximum replicas per function | 5 | 1 | No limit applied | as per Standard |
| Scale from Zero | Not available | Supported | Supported, with additional checks for Istio | as per Standard |
| Zero downtime updates | Not available | Not available | Supported with readiness probes and rolling updates | as per Standard |
-| Autoscaling strategy | RPS | Not applicable | [CPU utilization, Capacity (inflight requests), RPS, async queue-depth and Custom (e.g. Memory)](/architecture/autoscaling) | as per Standard |
+| Autoscaling strategy | RPS | Not applicable | [CPU utilization, Capacity (inflight requests), RPS, async queue-depth and Custom (e.g. Memory)](/architecture/autoscaling) | as per Standard, plus queue-based autoscaling |
| Autoscaling granularity | One global rule | Not applicable | Configurable per function | as per Standard |
Data-driven, intensive, or long-running functions are best suited to capacity-based or queue-based autoscaling, which is only available in OpenFaaS Pro.
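To make the capacity-based strategy above concrete, here is a hedged sketch of per-function scaling labels in `stack.yml` (label names follow the OpenFaaS autoscaling docs; the function name, image and targets are placeholders):

```yaml
functions:
  importer:
    image: ghcr.io/example/importer:latest   # placeholder image
    labels:
      com.openfaas.scale.type: capacity   # scale on inflight requests rather than RPS
      com.openfaas.scale.target: "5"      # aim for ~5 inflight requests per replica
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "10"
```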
@@ -73,7 +73,7 @@ Scaling to zero is also a commercial feature, which can be opted into on a per f
| UI Dashboard | Legacy UI (in code-freeze) | Dashboard is an optional add-on | [New UI dashboard](/openfaas-pro/dashboard) with metrics, logs & CI integration | as per Standard, but with support for multiple namespaces |
| Consume secrets in `faas-cli build` for npm, Go and PyPI | Not available | Via build-time secrets | Via build-time secrets | as per Standard |
| Kubernetes service accounts for functions | n/a | n/a | [Supported per function](/reference/workloads) | as per Standard |
| Metrics | Basic function metrics | As per Standard | Function, HTTP, CPU/RAM usage, and async/queue metrics | as per Standard |
| CPU & RAM utilization | Not available | As per Standard | Integrated with Prometheus metrics, OpenFaaS REST API & CLI | as per Standard |
| Grafana Dashboards | n/a | As per Standard | 4x dashboards supplied in [Customer Community](https://github.com/openfaas/customers) - overview, spotlight for debugging a function, queue-worker and Function Builder API | as per Standard |
@@ -83,9 +83,6 @@ Scaling to zero is also a commercial feature, which can be opted into on a per f
| Image Pull Policy (for air-gap) | Always | As per Standard | `Always`, `IfNotPresent` or `Never` | as per Standard |
| GPU support | Not available | Available for core services | Available for functions via Profiles | as per Standard |
-> Did you know? Synadia, the vendor of NATS Streaming announced the product is now deprecated, and it will receive no updates from June 2023 onwards. OpenFaaS Ltd developed an alternative based upon their newest product JetStream. [Learn more about JetStream for OpenFaaS](https://docs.openfaas.com/openfaas-pro/jetstream/)
-
-
Learn how to deploy functions via kubectl: [Function CRD](/openfaas-pro/function-crd)
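Since that last line points at the Function CRD, here is a hedged sketch of a minimal `Function` custom resource for `kubectl apply` (the name and image are placeholders; see the Function CRD page for the full schema):

```yaml
apiVersion: openfaas.com/v1
kind: Function
metadata:
  name: env
  namespace: openfaas-fn
spec:
  name: env
  image: ghcr.io/openfaas/alpine:latest   # placeholder image
  environment:
    fprocess: env                         # command run by the classic watchdog
```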
docs/openfaas-pro/jetstream.md (+18 -14)
@@ -1,10 +1,16 @@
# Queue Worker

-The Queue Worker is a batteries-included, scale-out queue for invoking functions asynchronously.
+> Note: This feature is included for [OpenFaaS Standard & For Enterprises](https://openfaas.com/pricing/) customers.

-This page is primarily concerned with how to configure the Queue Worker, you can learn about [asynchronous invocations here](/reference/async).
+The Queue Worker is part of the built-in queue system for OpenFaaS, built upon NATS JetStream.

-> Note: This feature is included for [OpenFaaS Standard & For Enterprises](https://openfaas.com/pricing/) customers.
+It's a batteries-included solution that's used at scale by many OpenFaaS customers in production every day.
+
+Async invocations can be submitted over HTTP by your own code or through an event-connector.
+
+This page is primarily concerned with how to configure the Queue Worker.
+
+You can learn about [asynchronous invocations here](/reference/async).
## Async use cases
@@ -26,15 +32,13 @@ On the blog we show reference examples built upon these architectural patterns:
## Terminology

-* NATS - an open source messaging system hosted by the [CNCF](https://www.cncf.io/)
-* NATS JetStream - a messaging system built on top of NATS for durable queues and message streams
+* [NATS](https://nats.io/) - an open source messaging system hosted by the [CNCF](https://www.cncf.io/)
+* [NATS JetStream](https://docs.nats.io/nats-concepts/jetstream) - a messaging system built on top of NATS for durable queues and message streams

1. A JetStream Server is the original NATS Core project, running in "jetstream mode"
2. A Stream is a message store; it is used in OpenFaaS to queue async invocation messages.
3. A Consumer is a stateful view of a stream; when clients consume messages from a stream, the consumer keeps track of which messages were delivered and acknowledged.

-Learn more about [NATS JetStream](https://docs.nats.io/nats-concepts/jetstream)
-
## Installation
**Embedded NATS server**
@@ -67,9 +71,11 @@ Instructions for a recommended NATS production deployment are available for cust
### Queue-based scaling for functions

-The queue-worker uses a shared NATS Stream and NATS Consumer by default, which works well with many of the existing [autoscaling strategies](/reference/async/#autoscaling).
+The queue-worker uses a shared NATS Stream and NATS Consumer by default, which works well with many of the existing [autoscaling strategies](/reference/async/#autoscaling). Requests are processed in FIFO order, and it is possible for certain functions to dominate or starve the queue.

-However, if you wish to scales functions based upon the queue depth for each, you can set up the queue-worker to scale its NATS Consumers dynamically for each function.
+A fairer approach is to scale functions based upon their respective queue depth, with a consumer created for each function as and when it is needed.
+
+The `mode` parameter can be set to `static` (default) or `function`.
```yaml
jetstreamQueueWorker:
@@ -78,13 +84,11 @@ jetstreamQueueWorker:
  inactiveThreshold: 30s
```
-The `mode` parameter can be set to `static` or `function`.
-
-If set to `static`, the queue-worker will scale its NATS Consumers based upon the number of replicas of the queue-worker. This is the default mode, and ideal for development, or constrained environments.
+* If set to `static`, the queue-worker will scale its NATS Consumers based upon the number of replicas of the queue-worker. This is the default mode, and is ideal for development or constrained environments.

-If set to `function`, the queue-worker will scale its NATS Consumers based upon the number of functions that are active in the queue. This is ideal for production environments where you want to [scale your functions based upon the queue depth](/reference/autoscaling/). It also gives messages queued at different times a fairer chance of being processed earlier.
+* If set to `function`, the queue-worker will scale its NATS Consumers based upon the number of functions that are active in the queue. This is ideal for production environments where you want to [scale your functions based upon the queue depth](/reference/autoscaling/). It also gives messages queued at different times a fairer chance of being processed earlier.

-The `inactiveThreshold` parameter can be used to set the threshold for when a function is considered inactive. If a function is inactive for longer than the threshold, the queue-worker will delete the NATS Consumer for that function.
+* The `inactiveThreshold` parameter can be used to set the threshold for when a function is considered inactive. If a function is inactive for longer than the threshold, the queue-worker will delete the NATS Consumer for that function.
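For completeness, a commented sketch of the Helm values this section discusses (the keys mirror the snippet shown above; the 30s threshold is illustrative):

```yaml
jetstreamQueueWorker:
  # static: consumers scale with the number of queue-worker replicas (default)
  # function: one consumer per function active in the queue, enabling queue-depth scaling
  mode: function
  # delete a function's consumer once it has been idle for this long
  inactiveThreshold: 30s
```

These values would typically be set in the openfaas chart's values.yaml, or passed with `--set` during a helm upgrade.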