2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/community_component_proposal.yaml
@@ -17,7 +17,7 @@ body:
- type: checkboxes
attributes:
label: Community component maintainer
description: Are you willing to be the maintainer for this component in Alloy?
description: Are you willing to be the maintainer for this component in Alloy?
options:
- label: I agree to be a maintainer for this component.
- type: textarea
@@ -7,6 +7,6 @@ index 167ea9134a..e07aecfe83 100644
tap = formula.tap
return if tap.nil?
+ return if args.force?

# if we haven't already found open requests, try for an exact match across all pull requests
GitHub.check_for_duplicate_pull_requests(
2 changes: 1 addition & 1 deletion .github/workflows/auto-apply-component-labels.yml
@@ -15,7 +15,7 @@ jobs:
- uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
with:
persist-credentials: false

- name: Run apply-component-labels.sh
run: ./.github/workflows/scripts/apply-component-labels.sh
env:
4 changes: 2 additions & 2 deletions .github/workflows/backport.yml
@@ -11,7 +11,7 @@ permissions:
jobs:
main:
runs-on: ubuntu-latest
# For now, only run this on the main repo.
# For now, only run this on the main repo.
if: github.repository == 'grafana/alloy'
# These permissions are needed to assume roles from Github's OIDC.
permissions:
@@ -36,7 +36,7 @@ jobs:
ALLOYBOT_APP_ID=alloybot:app_id
ALLOYBOT_PRIVATE_KEY=alloybot:private_key
export_env: false

- uses: actions/create-github-app-token@d72941d797fd3113feb6b93fd0dec494b13a2547 # v1.12.0
id: app-token
with:
2 changes: 1 addition & 1 deletion .github/workflows/publish-alloy-devel.yml
@@ -34,7 +34,7 @@ jobs:
with:
img-name: alloy-devel
dev: true

update_deployment_tools:
name: Update deployment_tools
runs-on: ubuntu-latest
2 changes: 1 addition & 1 deletion .github/workflows/scripts/generate-component-labels.sh
@@ -24,7 +24,7 @@ for README in $(find ./docs/sources/reference/components -name '*.md' ! -name '*
FILENAME=${README##*/}
LABEL_NAME="c/${FILENAME%.*}"
TYPE=$(echo "${FILENAME}" | cut -f1 -d '.' )

if (( "${#LABEL_NAME}" > 50 )); then
echo "'${LABEL_NAME}' exceeds GitHubs 50-character limit on labels, skipping"
continue
8 changes: 4 additions & 4 deletions docs/design/1443-dynamic-pipelines.md
@@ -48,10 +48,10 @@ A `foreach` block may start several sub-pipelines for a `collection` specified i
```alloy
// All components in the sub-pipeline will be scoped under "foreach.default/1/...".
// Here, "1" is sub-pipeline number 1.
// This way component names won't clash with other sub-pipelines from the same foreach,
// This way component names won't clash with other sub-pipelines from the same foreach,
// and with the names of components outside of the foreach.
foreach "default" {

// "collection" is what the for loop will iterate over.
collection = discovery.file.default.targets

@@ -72,7 +72,7 @@ discovery.file "default" {
files = ["/Users/batman/Desktop/redis_addresses.yaml"]
}

// Every component defined in the "foreach" block will be instantiated for each item in the collection.
// Every component defined in the "foreach" block will be instantiated for each item in the collection.
// The instantiated components will be scoped using the name of the foreach block and the index of the
// item in the collection. For example: /foreach.redis/0/prometheus.exporter.redis.default
foreach "redis" {
@@ -238,7 +238,7 @@ We should find answers to the unknowns below before this proposal is accepted:
* Will the solution only work for `list()`? What about `map()`?
* If we go with a `foreach`, we could have a `key` attribute in addition to the `var` one. That way we can also access the key. The `key` attribute can be a no-op if `collection` is a map?
* What about debug metrics? Should we aggregate the metrics for all "sub-pipelines"?
* If there is 1 series for each sub-pipeline, the amount of metrics could be huge.
* If there is 1 series for each sub-pipeline, the amount of metrics could be huge.
Some service discovery mechanisms may generate a huge number of elements in a list of targets.
* If we want to aggregate the metrics, how would we do that? Is it even possible to do in within Alloy?
* Can we have a configuration parameter which dictates whether the metrics should be aggregated or not?
6 changes: 3 additions & 3 deletions docs/developer/add-otel-component.md
@@ -192,9 +192,9 @@ You can find [detailed Alloy developer documentation](https://github.com/grafana
The following list provides some examples of OpenTelemetry components added by both Grafana Labs employees and Alloy community members.
These should provide good examples of pull requests that follow the guidelines above, as well as examples of more complex components than the `example` processor above.

* [`otelcol.receiver.filelog`](https://github.com/grafana/alloy/pull/2711)
* [`otelcol.processor.cumulativetodelta`](https://github.com/grafana/alloy/pull/2689)
* [`otelcol.receiver.tcplog`](https://github.com/grafana/alloy/pull/2701)
* [`otelcol.receiver.filelog`](https://github.com/grafana/alloy/pull/2711)
* [`otelcol.processor.cumulativetodelta`](https://github.com/grafana/alloy/pull/2689)
* [`otelcol.receiver.tcplog`](https://github.com/grafana/alloy/pull/2701)
* [`otelcol.receiver.awscloudwatch`](https://github.com/grafana/alloy/pull/2822)

## Example configuration
2 changes: 1 addition & 1 deletion docs/developer/issue-triage.md
@@ -47,7 +47,7 @@ After an effort has been made to triage an issue, the issue should be in one of

The `needs-attention` label is applied to issues that are seen as stale in a GitHub action.
This includes issues that have not been interacted with in 90 days.
Issues with the `needs-attention` label may be closed if they are not in an actionable state.
Issues with the `needs-attention` label may be closed if they are not in an actionable state.

The `keepalive` label can be applied to exempt an issue or pull request from being marked as stale.

4 changes: 2 additions & 2 deletions docs/developer/publishing-dev-container.md
@@ -18,8 +18,8 @@ The image is tagged using this format:
pr-<pr_number>-<version>-devel+<commit_sha>
```

- `<pr_number>`: the PR number that triggered the build
- `<version>`: the Alloy version inferred from the code
- `<pr_number>`: the PR number that triggered the build
- `<version>`: the Alloy version inferred from the code
- `<commit_sha>`: the Git SHA of the commit included in the image

**Example tag:**
6 changes: 3 additions & 3 deletions docs/developer/release/03-tag-release.md
@@ -8,10 +8,10 @@ A tag is required to create GitHub artifacts and as a prerequisite for publishin

2. Make sure you are up to date on the release branch:

```
```
git checkout release/VERSION_PREFIX
git fetch origin
git pull origin
git fetch origin
git pull origin
```

3. Determine the [VERSION](concepts/version.md).
2 changes: 1 addition & 1 deletion docs/developer/updating-otel/README.md
@@ -120,7 +120,7 @@ docker compose up -d
K6 will be configured to send traces on `ENDPOINT=host.docker.internal:4320`.
This means that the local Alloy instance must be configured to accept traces on `0.0.0.0:4320`.

The ["otelcol" components][otelcol-components] are the only components which use OTel.
The ["otelcol" components][otelcol-components] are the only components which use OTel.
Try to test as many of them as possible using a config file like this one:

[otelcol-components](https://grafana.com/docs/alloy/latest/reference/components/otelcol/)
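
A minimal test configuration along these lines could look like the following sketch. The gRPC endpoint matches the `0.0.0.0:4320` requirement above, but the component wiring and the `otelcol.exporter.debug` sink are assumptions for illustration, not the config file referenced by the repository:

```alloy
// Accept OTLP traces from k6 on the port the compose file sends to.
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4320"
  }

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

// Batch spans before handing them to the component under test.
otelcol.processor.batch "default" {
  output {
    traces = [otelcol.exporter.debug.default.input]
  }
}

// Write received spans to the console so the pipeline can be verified locally.
otelcol.exporter.debug "default" {
  verbosity = "detailed"
}
```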
@@ -1,5 +1,5 @@
services:

k6-tracing:
image: ghcr.io/grafana/xk6-client-tracing:v0.0.5
environment:
2 changes: 1 addition & 1 deletion docs/sources/_index.md
@@ -18,7 +18,7 @@ hero:
width: 110
height: 110
description: >-
Grafana Alloy combines the strengths of the leading collectors into one place. Whether observing applications, infrastructure, or both, Grafana Alloy can collect, process, and export telemetry signals to scale and future-proof your observability approach.
Grafana Alloy combines the strengths of the leading collectors into one place. Whether observing applications, infrastructure, or both, Grafana Alloy can collect, process, and export telemetry signals to scale and future-proof your observability approach.
cards:
title_class: pt-0 lh-1
items:
2 changes: 1 addition & 1 deletion docs/sources/_index.md.t
@@ -18,7 +18,7 @@ hero:
width: 110
height: 110
description: >-
Grafana Alloy combines the strengths of the leading collectors into one place. Whether observing applications, infrastructure, or both, Grafana Alloy can collect, process, and export telemetry signals to scale and future-proof your observability approach.
Grafana Alloy combines the strengths of the leading collectors into one place. Whether observing applications, infrastructure, or both, Grafana Alloy can collect, process, and export telemetry signals to scale and future-proof your observability approach.
cards:
title_class: pt-0 lh-1
items:
10 changes: 5 additions & 5 deletions docs/sources/collect/ecs-opentelemetry-data.md
@@ -52,12 +52,12 @@ Read [`otel-prometheus`][otel-prometheus] to find out how to set the Prometheus
Complete the following steps to create a sample task. Refer to the [ADOT doc][adot-doc] for more information.

1. Create an SSM Parameter Store entry to hold the collector configuration file.

1. Open the AWS Console.
1. In the AWS Console, choose Parameter Store.
1. Choose *Create parameter*.
1. Create a parameter with the following values:

* Name: `collector-config`
* Tier: Standard
* Type: String
@@ -72,7 +72,7 @@ Complete the following steps to create a sample task. Refer to the [ADOT doc][ad
* `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
* Add an environment variable named AOT_CONFIG_CONTENT.
Select ValueFrom to tell ECS to get the value from the SSM Parameter, and set the value to `collector-config`.

1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.

### Configure {{% param "PRODUCT_NAME" %}}
@@ -112,12 +112,12 @@ This configuration sets up a scrape job for the container metrics and exports th
Complete the following steps to create a sample task.

1. Create an SSM Parameter Store entry to hold the collector configuration file.

1. Open the AWS Console.
1. In the AWS Console, choose Parameter Store.
1. Choose *Create parameter*.
1. Create a parameter with the following values:

* Name: `collector-config`
* Tier: Standard
* Type: String
2 changes: 1 addition & 1 deletion docs/sources/collect/opentelemetry-to-lgtm-stack.md
@@ -122,7 +122,7 @@ You can implement the following pipelines to send your data to Loki, Tempo, and

```plaintext
Metrics: OTLP Receiver → Batch Processor → Prometheus Exporter → Prometheus Remote Write
Logs: OTLP Receiver → Batch Processor → Loki Exporter → Loki Write
Logs: OTLP Receiver → Batch Processor → Loki Exporter → Loki Write
Traces: OTLP Receiver → Batch Processor → OTLP Exporter (gRPC/HTTP)
```
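
For illustration, a sketch of how these three pipelines might be wired in Alloy configuration syntax is shown below. The endpoint URLs are placeholders, not values taken from this guide:

```alloy
otelcol.receiver.otlp "default" {
  grpc {}
  http {}

  output {
    metrics = [otelcol.processor.batch.default.input]
    logs    = [otelcol.processor.batch.default.input]
    traces  = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.prometheus.default.input]
    logs    = [otelcol.exporter.loki.default.input]
    traces  = [otelcol.exporter.otlp.tempo.input]
  }
}

// Metrics: convert OTLP metrics and remote-write them to Prometheus.
otelcol.exporter.prometheus "default" {
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}

// Logs: convert OTLP logs into Loki entries and push them to Loki.
otelcol.exporter.loki "default" {
  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}

// Traces: forward OTLP traces to Tempo over gRPC.
otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "tempo:4317"
    tls {
      insecure = true
    }
  }
}
```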

54 changes: 27 additions & 27 deletions docs/sources/monitor/monitor-kubernetes-logs.md
@@ -8,29 +8,29 @@ weight: 600

# Monitor Kubernetes logs with {{% param "FULL_PRODUCT_NAME" %}}

Kubernetes captures logs from each container in a running Pod.
Kubernetes captures logs from each container in a running Pod.
With {{< param "PRODUCT_NAME" >}}, you can collect Kubernetes logs, forward them to a Grafana stack, and create dashboards to monitor your Kubernetes Deployment.

The [`alloy-scenarios`][scenarios] repository contains complete examples of {{< param "PRODUCT_NAME" >}} deployments.
The [`alloy-scenarios`][scenarios] repository contains complete examples of {{< param "PRODUCT_NAME" >}} deployments.
Clone the repository and use the examples to understand how {{< param "PRODUCT_NAME" >}} collects, processes, and exports telemetry signals.

This example scenario uses a Kubernetes Monitoring Helm chart to deploy and monitor Kubernetes logs.
It installs three Helm charts: Loki, Grafana, and {{< param "PRODUCT_NAME" >}}.
This example scenario uses a Kubernetes Monitoring Helm chart to deploy and monitor Kubernetes logs.
It installs three Helm charts: Loki, Grafana, and {{< param "PRODUCT_NAME" >}}.
The Helm chart simplifies configuration and deploys best practices for monitoring Kubernetes clusters.

{{< param "PRODUCT_NAME" >}}, installed with `k8s-monitoring-helm`, collects two log sources: [Pod Logs][] and [Kubernetes Events][].

[Pod Logs]: https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes
[Kubernetes Events]: https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/
[Pod Logs]: https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes
[Kubernetes Events]: https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/
[scenarios]: https://github.com/grafana/alloy-scenarios/

## Before you begin

Ensure you have the following:

* [Docker](https://www.docker.com/)
* [Git](https://git-scm.com/)
* [Helm](https://helm.sh/docs/intro/install/)
* [Docker](https://www.docker.com/)
* [Git](https://git-scm.com/)
* [Helm](https://helm.sh/docs/intro/install/)
* [kind](https://kind.sigs.k8s.io/docs/user/quick-start/)

## Clone and deploy the example
@@ -51,7 +51,7 @@ Follow these steps to clone the scenarios repository and deploy the monitoring e
cd alloy-scenarios/k8s-logs
```

1. Create a local Kubernetes cluster using kind.
1. Create a local Kubernetes cluster using kind.
The `kind.yml` file provides the cluster configuration:

```shell
@@ -71,19 +71,19 @@ Follow these steps to clone the scenarios repository and deploy the monitoring e
kubectl create namespace prod
```

1. Deploy Loki in the `meta` namespace.
Loki stores the collected logs.
1. Deploy Loki in the `meta` namespace.
Loki stores the collected logs.
The `loki-values.yml` file contains the Loki Helm chart configuration:

```shell
helm install --values loki-values.yml loki grafana/loki -n meta
```

This Helm chart installs Loki in monolithic mode.
This Helm chart installs Loki in monolithic mode.
For more details, refer to the [Loki documentation](https://grafana.com/docs/loki/latest/get-started/deployment-modes/).

1. Deploy Grafana in the `meta` namespace.
You can use Grafana to visualize the logs stored in Loki.
1. Deploy Grafana in the `meta` namespace.
You can use Grafana to visualize the logs stored in Loki.
The `grafana-values.yml` file contains the Grafana Helm chart configuration:

```shell
@@ -92,7 +92,7 @@ Follow these steps to clone the scenarios repository and deploy the monitoring e

This Helm chart installs Grafana and sets the `datasources.datasources.yaml` field to the Loki data source configuration.

1. Deploy {{< param "PRODUCT_NAME" >}} in the `meta` namespace.
1. Deploy {{< param "PRODUCT_NAME" >}} in the `meta` namespace.
The `k8s-monitoring-values.yml` file contains the Kubernetes monitoring Helm chart configuration:

```shell
@@ -129,7 +129,7 @@ Follow these steps to clone the scenarios repository and deploy the monitoring e
kubectl --namespace meta port-forward $POD_NAME 12345
```

1. Deploy Grafana Tempo to the `prod` namespace.
1. Deploy Grafana Tempo to the `prod` namespace.
Tempo generates logs for this example:

```shell
@@ -154,12 +154,12 @@ To create a [dashboard](https://grafana.com/docs/grafana/latest/getting-started/

## Understand the Kubernetes Monitoring Helm chart

The Kubernetes Monitoring Helm chart, `k8s-monitoring-helm`, collects, scrapes, and forwards Kubernetes telemetry data to a Grafana stack.
The Kubernetes Monitoring Helm chart, `k8s-monitoring-helm`, collects, scrapes, and forwards Kubernetes telemetry data to a Grafana stack.
This includes metrics, logs, traces, and continuous profiling data.

### `cluster`

Define the cluster name as `meta-monitoring-tutorial`.
Define the cluster name as `meta-monitoring-tutorial`.
This is a static label attached to all logs collected by the Kubernetes Monitoring Helm chart.

```yaml
@@ -169,7 +169,7 @@ cluster:

### `destinations`

Define a destination named `loki` to forward logs to Loki.
Define a destination named `loki` to forward logs to Loki.
The `url` attribute specifies the URL of the Loki gateway.

```yaml
@@ -183,7 +183,7 @@ destinations:

Enable the collection of cluster events.

* `collector`: Use the `alloy-logs` collector to collect logs.
* `collector`: Use the `alloy-logs` collector to collect logs.
* `namespaces`: Specify the `meta` and `prod` namespaces to collect logs from.

```yaml
@@ -197,7 +197,7 @@ clusterEvents:

### `nodeLogs`

Disable the collection of node logs.
Disable the collection of node logs.
Collecting node logs requires mounting `/var/log/journal`, which is out of scope for this example.

```yaml
@@ -209,9 +209,9 @@ nodeLogs:

Enable the collection of Pod logs.

* `labelsToKeep`: Specify labels to keep when collecting logs.
This configuration removes `pod` from the labels to keep.
* `structuredMetadata`: Specify structured metadata to collect.
* `labelsToKeep`: Specify labels to keep when collecting logs.
This configuration removes `pod` from the labels to keep.
* `structuredMetadata`: Specify structured metadata to collect.
This configuration sets the structured metadata `pod` to keep the Pod name for querying.

```yaml
@@ -229,8 +229,8 @@ podLogs:

### Define the {{% param "PRODUCT_NAME" %}} role

The Kubernetes Monitoring Helm chart deploys only what you need.
In this case, the configuration deploys {{< param "PRODUCT_NAME" >}} with the capability to collect logs.
The Kubernetes Monitoring Helm chart deploys only what you need.
In this case, the configuration deploys {{< param "PRODUCT_NAME" >}} with the capability to collect logs.
Metrics, traces, and continuous profiling are disabled.

```yaml
6 changes: 3 additions & 3 deletions docs/sources/monitor/monitor-logs-over-tcp.md
@@ -141,9 +141,9 @@ In this example, the component requires the following arguments:
```alloy
loki.process "labels" {
stage.json {
expressions = { "extracted_service" = "service_name",
"extracted_code_line" = "code_line",
"extracted_server" = "server_id",
expressions = { "extracted_service" = "service_name",
"extracted_code_line" = "code_line",
"extracted_server" = "server_id",
}
}
