
Commit 9058bff

Fix versions in Synthetics docker commands (#2637)

This PR replaces manually entered version numbers with the `version.stack` variable.

1 parent 1a6fd45
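
The idea behind the change: fences tagged `subs=true` (added in the diff below) appear to let the docs build substitute `{{version.stack}}` with the current stack version at render time, so the commands no longer go stale. A conceptual sketch of that substitution, assuming a simple template replacement; this is an illustration, not the actual docs-builder code:

```sh
# Conceptual illustration only; not the actual docs-builder implementation.
VERSION_STACK=8.16.1   # hypothetical current value of version.stack
echo 'docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:{{version.stack}}' \
  | sed "s/{{version\.stack}}/${VERSION_STACK}/"
# => docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:8.16.1
```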

File tree

1 file changed: +26 -6 lines changed


solutions/observability/synthetics/monitor-resources-on-private-networks.md

Lines changed: 26 additions & 6 deletions
@@ -65,22 +65,42 @@ The `elastic-agent-complete` Docker image is the only way to have all available
 
 To pull the Docker image run:
 
-```sh
-docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:8.16.1
+::::{tab-set}
+:group: docker
+:::{tab-item} Latest
+:sync: latest
+
+```shell subs=true
+docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:{{version.stack}}
+```
+
+:::
+
+:::{tab-item} Specific version
+:sync: specific
+
+```sh subs=true
+docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:<SPECIFIC.VERSION.NUMBER>
 ```
 
+You can download and install a specific version of the {{stack}} by replacing `<SPECIFIC.VERSION.NUMBER>` with the version number you want. For example, you can replace `<SPECIFIC.VERSION.NUMBER>` with {{version.stack.base}}.
+:::
+
+::::
+
 Then enroll and run an {{agent}}. You’ll need an enrollment token and the URL of the {{fleet-server}}. You can use the default enrollment token for your policy or create new policies and [enrollment tokens](/reference/fleet/fleet-enrollment-tokens.md) as needed.
 
 For more information on running {{agent}} with Docker, refer to [Run {{agent}} in a container](/reference/fleet/elastic-agent-container.md).
 
-```sh
+
+```shell subs=true
 docker run \
 --env FLEET_ENROLL=1 \
 --env FLEET_URL={fleet_server_host_url} \
 --env FLEET_ENROLLMENT_TOKEN={enrollment_token} \
 --cap-add=NET_RAW \
 --cap-add=SETUID \
---rm docker.elastic.co/elastic-agent/elastic-agent-complete:8.16.1
+--rm docker.elastic.co/elastic-agent/elastic-agent-complete:{{version.stack}}
 ```
 
 ::::{important}
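
For reference, here is what the enroll-and-run command above looks like with its placeholders filled in. All values below are illustrative: the Fleet Server URL and enrollment token are made up, and `{{version.stack}}` is shown rendered as `8.16.1`, the version the old text hardcoded.

```sh
# Illustrative values only: the URL and token below are placeholders.
# {{version.stack}} renders to the current stack version, such as 8.16.1.
docker run \
  --env FLEET_ENROLL=1 \
  --env FLEET_URL=https://fleet-server.example.com:8220 \
  --env FLEET_ENROLLMENT_TOKEN=bXktZW5yb2xsbWVudC10b2tlbg== \
  --cap-add=NET_RAW \
  --cap-add=SETUID \
  --rm docker.elastic.co/elastic-agent/elastic-agent-complete:8.16.1
```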
@@ -119,15 +139,15 @@ By default {{private-location}}s are configured to allow two simultaneous browse
 
 It is critical to allocate enough memory and CPU capacity to handle configured limits. Resource requirements will vary depending on simultaneous workload and monitor complexity:
 
-**For browser monitors**: Start by allocating at least 2 GiB of memory and two cores _per browser instance_ to ensure consistent performance and avoid out-of-memory errors. Then adjust as needed. 
+**For browser monitors**: Start by allocating at least 2 GiB of memory and two cores _per browser instance_ to ensure consistent performance and avoid out-of-memory errors. Then adjust as needed.
 **For tcp, http, icmp**: Much less memory is needed; start by allocating at least 512 MiB of memory and two cores _globally_. While this is enough to run a large number of lightweight monitors, it is recommended to track resource usage and adjust accordingly.
 
 Example: For a private location expected to run 2 concurrent browser monitors and 100 HTTP checks, the recommended allocation is 2 * (2 GiB + 2 vCPU) + (512 MiB + 2 vCPU) => 4.5 GiB + 6 vCPU.
 
 ### Known limitations on vertical scaling
 
 - A single private location will not scale beyond 10,000 monitors. Exceeding this number will result in agent degradation and inconsistent execution, regardless of the resources allocated.
-- Complex monitor configuration can disproportionately increase the private location policy size, leading to agent communication errors and degradation even if the limit mentioned above hasn't been reached. 
+- Complex monitor configuration can disproportionately increase the private location policy size, leading to agent communication errors and degradation even if the limit mentioned above hasn't been reached.
 
 If you're facing one of these scenarios, it is likely that the private location has grown too large and needs to be split into smaller locations, each allotted a portion of the original location monitors.
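
As a quick sanity check on the sizing arithmetic above, a minimal shell sketch (a hypothetical helper, not part of this commit or the docs) that derives the recommended allocation from a browser-monitor count:

```sh
#!/usr/bin/env bash
# Rule of thumb from the text above: 2 GiB + 2 vCPU per concurrent browser
# monitor, plus a flat 512 MiB + 2 vCPU for lightweight (tcp/http/icmp) checks.
BROWSER_MONITORS=2
MEM_MIB=$(( BROWSER_MONITORS * 2048 + 512 ))   # 2*2048 + 512 = 4608 MiB (4.5 GiB)
VCPU=$(( BROWSER_MONITORS * 2 + 2 ))           # 2*2 + 2 = 6 vCPU
echo "Allocate at least ${MEM_MIB} MiB of memory and ${VCPU} vCPU"
```

For the example in the text (2 concurrent browser monitors plus 100 HTTP checks) this prints 4608 MiB and 6 vCPU, matching the 4.5 GiB + 6 vCPU recommendation.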
