docs/architecture/performance.md (+5 -1)
@@ -25,7 +25,7 @@ HA:
 Project tuning:
 
 * [ ] I have extended or removed memory limits / quotas for each service and function
-* [ ] I have created my own function using one of the new HTTP templates (see below for a list)
+* [ ] I have created my own function using one of the new HTTP templates and the of-watchdog (see below for a list)
 * [ ] I understand the difference between the original default watchdog, which forks one process per request, and the new of-watchdog's HTTP mode, and I am using the latter
 * [ ] I have turned off `write_debug` and `read_debug` so that the logs for the function are kept sparse
 * [ ] I am monitoring / collecting logs from the core services and the function under test
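Several of the checklist items above (memory limits, an of-watchdog HTTP template, and sparse logging via `write_debug`/`read_debug`) can be captured directly in the function's stack.yml. A minimal sketch follows; the function name, image and memory figures are placeholders, and `golang-http` stands in for whichever of-watchdog based HTTP template you actually pick:

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  load-test-fn:                           # placeholder name for the function under test
    lang: golang-http                     # any of-watchdog based HTTP template
    handler: ./load-test-fn
    image: example/load-test-fn:latest    # placeholder image
    environment:
      write_debug: "false"                # keep function logs sparse during the test
      read_debug: "false"
    requests:
      memory: 128Mi                       # placeholder sizing for the workload under test
    limits:
      memory: 256Mi                       # extend or remove limits, as per the checklist
```

Deploying it to the test cluster is then a matter of `faas-cli up -f stack.yml`.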
@@ -72,6 +72,10 @@ The test environment needs to replicate the production environment you are likely
 
 There are several sample functions provided in the project, but that does not automatically qualify them for benchmarking or load-testing. It's important to create your own function and understand exactly what is being put into it so you can measure it efficiently.
 
+* Not using the of-watchdog
+
+Only the watchdog and of-watchdog implement the correct healthcheck and shutdown signals to be compatible with Kubernetes' startup, scaling and termination mechanisms. If you are running a container exposing port `8080` which does not use the of-watchdog, then the results are not representative. You will need to mimic exactly the mechanisms in the of-watchdog or use it as a shim.
+
 * Only picking the best/worst case figure
 
 When using a scientific method you need to carry out multiple test runs and account for caching/memory/paging of the operating system, including any additional background processes that may be running. The 99th percentile figures should be used, not the best or worst case figure from arbitrary runs.
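To illustrate the "use it as a shim" advice added above: a container that already serves HTTP on an internal port can be wrapped with the of-watchdog via a multi-stage Dockerfile along these lines. This is only a sketch; the base image, internal port and `server.js` process are placeholders, and the of-watchdog tag should be pinned to a current release:

```dockerfile
# Stage 1: pull the of-watchdog binary (pin a real release tag here)
FROM ghcr.io/openfaas/of-watchdog:0.9.11 AS watchdog

# Stage 2: the existing HTTP service to be benchmarked (placeholder base image)
FROM node:18-alpine
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog

WORKDIR /home/app
COPY . .

# of-watchdog in HTTP mode listens on 8080 and proxies to the inner server
ENV mode="http"
ENV upstream_url="http://127.0.0.1:3000"
ENV fprocess="node server.js"

EXPOSE 8080
CMD ["fwatchdog"]
```

Wrapped this way, the container should respond to the same health and lifecycle signals as a regular OpenFaaS function, so the figures it produces are comparable with the rest of the platform.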