The `elastic-agent-complete` Docker image is the only way to have all available …

To pull the Docker image run:

::::{tab-set}
:group: docker
:::{tab-item} Latest
:sync: latest

```shell subs=true
docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:{{version.stack}}
```

:::

:::{tab-item} Specific version
:sync: specific

```shell subs=true
docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:<SPECIFIC.VERSION.NUMBER>
```

You can download and install a specific version of the {{stack}} by replacing `<SPECIFIC.VERSION.NUMBER>` with the version number you want, for example {{version.stack.base}}. The rendered command is shown after these tabs.
**Contributor:** Does the version number always need to be three digits? For example, would 9.0 work?

**Contributor Author:** I think it's always 3. I'll check before merging.

:::

::::
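
For instance, with the {{version.stack.base}} example above, the specific-version command would render as follows (this is simply the command from the tab with the placeholder filled in):

```shell subs=true
docker pull docker.elastic.co/elastic-agent/elastic-agent-complete:{{version.stack.base}}
```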

Then enroll and run an {{agent}}. You’ll need an enrollment token and the URL of the {{fleet-server}}. You can use the default enrollment token for your policy or create new policies and [enrollment tokens](/reference/fleet/fleet-enrollment-tokens.md) as needed.

For more information on running {{agent}} with Docker, refer to [Run {{agent}} in a container](/reference/fleet/elastic-agent-container.md).

```shell subs=true
docker run \
--env FLEET_ENROLL=1 \
--env FLEET_URL={fleet_server_host_url} \
--env FLEET_ENROLLMENT_TOKEN={enrollment_token} \
--cap-add=NET_RAW \
--cap-add=SETUID \
--rm docker.elastic.co/elastic-agent/elastic-agent-complete:{{version.stack}}
```
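
To confirm the container is up, you can use standard Docker commands from a second terminal. This is a general sketch, not specific to {{agent}}; the `elastic-agent` container name is an assumption that only applies if you add `--name elastic-agent` to the run command above.

```sh
# List running containers and look for the elastic-agent-complete image
docker ps

# Follow the agent logs (assumes the container was started with
# --name elastic-agent; otherwise use the container ID from docker ps)
docker logs -f elastic-agent
```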

::::{important}
By default {{private-location}}s are configured to allow two simultaneous browse…

It is critical to allocate enough memory and CPU capacity to handle configured limits. Resource requirements will vary depending on simultaneous workload and monitor complexity:

**For browser monitors**: Start by allocating at least 2 GiB of memory and two cores _per browser instance_ to ensure consistent performance and avoid out-of-memory errors. Then adjust as needed.
**For tcp, http, and icmp monitors**: Much less memory is needed. Start by allocating at least 512 MiB of memory and two cores _globally_. While this will be enough to run a large number of lightweight monitors, it is recommended to track resource usage and adjust accordingly.

Example: For a private location expected to run 2 concurrent browser monitors and 100 HTTP checks, the recommended allocation is 2 × (2 GiB + 2 vCPU) + (512 MiB + 2 vCPU) = 4.5 GiB + 6 vCPU.
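
As a rough sketch of applying that allocation when running the agent in Docker, you could cap the container with Docker's standard `--memory` and `--cpus` flags. The values below are just the example figures above, and the flags are generic Docker options rather than anything prescribed by {{agent}}:

```shell subs=true
# Caps the container at the example allocation above:
# 4608 MiB (= 4.5 GiB) of memory and 6 CPUs.
docker run \
--memory=4608m \
--cpus=6 \
--env FLEET_ENROLL=1 \
--env FLEET_URL={fleet_server_host_url} \
--env FLEET_ENROLLMENT_TOKEN={enrollment_token} \
--cap-add=NET_RAW \
--cap-add=SETUID \
--rm docker.elastic.co/elastic-agent/elastic-agent-complete:{{version.stack}}
```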

### Known limitations on vertical scaling

- A single private location will not scale beyond 10,000 monitors. Exceeding this number will result in agent degradation and inconsistent execution, regardless of the resources allocated.
- Complex monitor configuration can disproportionately increase the private location policy size, leading to agent communication errors and degradation even if the limit mentioned above hasn't been reached.

If you're facing one of these scenarios, the private location has likely grown too large and needs to be split into smaller locations, each allotted a portion of the original location's monitors.
