
Robot Upgrade: falco chart upgrade from 2.0.17 to 8.0.1#3956

Open
weizhoublue wants to merge 1 commit into main from upgrade/falco/8.0.1
Conversation

@weizhoublue
Member

I am robot, upgrade: project falco chart upgrade from 2.0.17 to 8.0.1

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request performs a major version upgrade of the Falco Helm chart, moving from 2.0.17 to 8.0.1. This update integrates the latest Falco core (v0.43.0) and introduces new subcharts for enhanced functionality, such as automated response actions via falco-talon and improved Kubernetes metadata collection with k8s-metacollector. It also includes a significant update to the falcosidekick integration, expanding its output capabilities and monitoring features. Users should be aware of several deprecations and breaking changes, particularly concerning gRPC, older eBPF/gVisor drivers, and container metadata collectors, which are detailed in the updated documentation.

Highlights

  • Major Version Upgrade: The Falco Helm chart has been upgraded from version 2.0.17 to 8.0.1, bringing significant updates to Falco itself and its ecosystem components.
  • Falco Core Update: Falco's appVersion has been updated from 0.32.2 to 0.43.0, incorporating numerous new features, performance improvements, and bug fixes from the Falco project.
  • New Subcharts Introduced: Two new subcharts, falco-talon (version 0.3.0) and k8s-metacollector (version 0.1.10), have been added to extend Falco's capabilities for response actions and Kubernetes metadata collection, respectively.
  • Falcosidekick Update: The falcosidekick subchart has been updated to version 0.12.1, including its appVersion to 2.31.1, and introducing new outputs, Prometheus rules, and Grafana dashboards.
  • Deprecations and Breaking Changes: Several components and configurations are now deprecated, including gRPC output/server, Legacy eBPF probe, gVisor engine, and older container metadata collectors. A new BREAKING-CHANGES.md file has been added to detail these changes and guide users through migration.
  • Helm Chart Structure and Documentation Overhaul: The Helm chart's internal structure has been refactored, including updates to helpers, configmap generation, and removal of bundled rules files. The README.gotmpl has been extensively updated to reflect these changes and provide comprehensive documentation.


Changelog
  • charts/falco/falco/CHANGELOG.md
    • v8.0.1: Fixed serviceMonitor template to properly merge custom selector labels and serviceAccount unit test to handle empty template output.
    • v8.0.0: Upgraded Falco to v0.43.0, Falcoctl to v0.12.2, container plugin to v0.6.1, and k8smeta plugin to v0.4.1. Added deprecation notices for gRPC output/server, Legacy eBPF probe, and gVisor engine.
    • v7.2.1: Bumped falcoctl to v0.12.1 (fixes issue with state dir configuration through config file).
    • v7.2.0: Added artifact-state-dir volume shared between falcoctl-artifact-install and falcoctl-artifact-follow to maintain artifact state consistency.
  • charts/falco/falco/charts/falco-talon/CHANGELOG.md
    • 0.3.0: Bumped version to v0.3.0 and fixed missing usage of imagePullSecrets.
    • 0.2.3: Added a Grafana dashboard for Prometheus metrics.
    • 0.2.1: Bumped version to v0.2.1 for bug fixes.
    • 0.2.0: Configured pod to not rollout on configmap change, to rollout on secret change, and added config.rulesOverride.
  • charts/falco/falco/charts/falcosidekick/CHANGELOG.md
    • 0.12.1: Fixed Redis customConfig type.
    • 0.12.0: Allowed specifying folder annotation for Grafana dashboards.
    • 0.11.1: Added Splunk output.
    • 0.11.0: Conditionally deployed Loki Grafana dashboard only if Loki is enabled and fixed Loki Grafana filter.
  • charts/falco/falco/charts/k8s-metacollector/CHANGELOG.md
    • v0.1.10: Fixed Grafana dashboards datasources.
    • v0.1.9: Added podLabels.
    • v0.1.8: Bumped application version to 0.1.1.
    • v0.1.7: Lowered initial delay seconds for readiness and liveness probes.


@gemini-code-assist (bot) left a comment


Code Review

This pull request upgrades the Falco Helm chart from version 2.0.17 to 8.0.1. This is a significant major version bump that introduces many new features, subcharts (falco-talon, k8s-metacollector), and breaking changes. The changes align the chart with the latest Falco developments, such as using falcoctl for managing artifacts. My review focuses on identifying potential bugs, inconsistencies, and documentation errors introduced during this large-scale upgrade. I've found a few issues, including a copy-paste error in a template, and several formatting errors and broken links in the documentation files, which could impact usability and understanding for users.

Comment on lines +57 to +59
create_index_template: {{ .Values.config.notifiers.loki.createIndexTemplate }}
number_of_shards: {{ .Values.config.notifiers.loki.numberOfShards }}
number_of_replicas: {{ .Values.config.notifiers.loki.numberOfReplicas }}


high

There appears to be a copy-paste error in the elasticsearch notifier configuration. It's using values from the loki notifier (.Values.config.notifiers.loki.*) instead of its own configuration path (.Values.config.notifiers.elasticsearch.*). This will prevent the Elasticsearch notifier from being configured correctly.

        create_index_template: {{ .Values.config.notifiers.elasticsearch.createIndexTemplate }}     
        number_of_shards: {{ .Values.config.notifiers.elasticsearch.numberOfShards }}     
        number_of_replicas: {{ .Values.config.notifiers.elasticsearch.numberOfReplicas }}     
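To make the mix-up concrete, here is a toy values lookup (illustrative Python with made-up shard counts; not part of the chart) showing which subtree each dotted path actually reads:

```python
# Toy illustration (not chart code): resolve dotted Helm-style values paths
# against a minimal values tree to show which settings each path picks up.
values = {
    "config": {
        "notifiers": {
            "loki": {"numberOfShards": 1},           # hypothetical value
            "elasticsearch": {"numberOfShards": 5},  # hypothetical value
        }
    }
}

def lookup(path, tree=values):
    """Walk a dotted path such as 'config.notifiers.loki.numberOfShards'."""
    for key in path.split("."):
        tree = tree[key]
    return tree

# As written, the elasticsearch notifier block renders loki's value:
assert lookup("config.notifiers.loki.numberOfShards") == 1
# The suggested fix reads from the elasticsearch subtree instead:
assert lookup("config.notifiers.elasticsearch.numberOfShards") == 5
```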

Comment on lines +115 to +290
```bash=
helm upgrade falco falcosecurity/falco \
--namespace=falco \
--reuse-values \
--set falcoctl.artifact.install.enabled=false \
--set falcoctl.artifact.follow.enabled=false
```

This way you will upgrade Falco to `v0.34.0`.

**NOTE**: The new version of Falco itself, installed by the chart, does not introduce breaking changes. You can port your previous Falco configuration to the new `values.yaml` by copy-pasting it.

### Falcoctl support

[Falcoctl](https://github.com/falcosecurity/falcoctl) is a new tool created to automate operations when deploying Falco.

Before `v3.0.0` of the charts, _rulesfiles_ and _plugins_ were shipped bundled in the Falco docker image. This made it impossible to update the _rulesfiles_ and _plugins_ until a new version of Falco was released. Operators had to manually update the _rulesfiles_ or add new _plugins_ to Falco. The process was cumbersome and error-prone: operators had to create their own Falco docker images with the new plugins baked into them, or wait for a new Falco release.

Starting from the `v3.0.0` chart release, we added support for **falcoctl** in the charts. Deployed alongside Falco, it can:

- _install_ artifacts of the Falco ecosystem (i.e. plugins and rules, at the moment of writing)
- _follow_ those artifacts (only _rulesfile_ artifacts are recommended) to keep them up to date with the latest releases of the Falcosecurity organization. This allows, for instance, updating rules that detect new vulnerabilities or security issues without redeploying Falco.

The chart deploys _falcoctl_ using an _init container_ and/or a _sidecar container_. The former installs artifacts and makes them available to Falco at start-up time; the latter runs alongside Falco and updates the local artifacts when new updates are detected.

Based on your deployment scenario:

1. Falco without _plugins_ and you just want to upgrade to the new Falco version:
```bash=
helm upgrade falco falcosecurity/falco \
--namespace=falco \
--reuse-values \
--set falcoctl.artifact.install.enabled=false \
--set falcoctl.artifact.follow.enabled=false
```
When upgrading an existing release, _helm_ uses the new chart version. Since we added new template files and changed the values schema (added new parameters), we explicitly disable the **falcoctl** tool. By doing so, the command will reuse the existing configuration but will deploy Falco version `0.34.0`.
2. Falco without _plugins_ and you want to automatically get new _falco-rules_ as soon as they are released:

```bash=
helm upgrade falco falcosecurity/falco \
--namespace=falco \
```

Helm first applies the values coming from the new chart version, then overrides them using the values of the previous release. The outcome is a new release of Falco that:
- uses the previous configuration;
- runs Falco version `0.34.0`;
- uses **falcoctl** to install and automatically update the [_falco-rules_](https://github.com/falcosecurity/rules/);
- checks for new updates every 6h (default value).

3. Falco with _plugins_ and you want just to upgrade Falco:
```bash=
helm upgrade falco falcosecurity/falco \
--namespace=falco \
--reuse-values \
--set falcoctl.artifact.install.enabled=false \
--set falcoctl.artifact.follow.enabled=false
```
Very similar to scenario `1.`
4. Falco with plugins and you want to use **falcoctl** to download the plugins' _rulesfiles_:
- Save **falcoctl** configuration to file:

```yaml=
cat << EOF > ./falcoctl-values.yaml
####################
# falcoctl config  #
####################
falcoctl:
  image:
    # -- The image pull policy.
    pullPolicy: IfNotPresent
    # -- The image registry to pull from.
    registry: docker.io
    # -- The image repository to pull from.
    repository: falcosecurity/falcoctl
    # -- Overrides the image tag whose default is the chart appVersion.
    tag: "main"
  artifact:
    # -- Runs the "falcoctl artifact install" command as an init container. It is used to install artifacts before
    # Falco starts. It provides them to Falco by using an emptyDir volume.
    install:
      enabled: true
      # -- Extra environment variables passed to the falcoctl-artifact-install init container.
      env: {}
      # -- Arguments to pass to the falcoctl-artifact-install init container.
      args: ["--verbose"]
      # -- Resources requests and limits for the falcoctl-artifact-install init container.
      resources: {}
      # -- Security context for the falcoctl init container.
      securityContext: {}
    # -- Runs the "falcoctl artifact follow" command as a sidecar container. It is used to automatically check for
    # updates given a list of artifacts. If an update is found it downloads and installs it in a shared folder (emptyDir)
    # that is accessible by Falco. Rulesfiles are automatically detected and loaded by Falco once they are installed in the
    # correct folder by falcoctl. To prevent new versions of artifacts from breaking Falco, the tool checks that an artifact
    # is compatible with the running version of Falco before installing it.
    follow:
      enabled: true
      # -- Extra environment variables passed to the falcoctl-artifact-follow sidecar container.
      env: {}
      # -- Arguments to pass to the falcoctl-artifact-follow sidecar container.
      args: ["--verbose"]
      # -- Resources requests and limits for the falcoctl-artifact-follow sidecar container.
      resources: {}
      # -- Security context for the falcoctl-artifact-follow sidecar container.
      securityContext: {}
  # -- Configuration file of the falcoctl tool. It is saved in a configmap and mounted on the falcoctl containers.
  config:
    # -- List of indexes that falcoctl downloads and uses to locate and download artifacts. For more info see:
    # https://github.com/falcosecurity/falcoctl/blob/main/proposals/20220916-rules-and-plugin-distribution.md#index-file-overview
    indexes:
    - name: falcosecurity
      url: https://falcosecurity.github.io/falcoctl/index.yaml
    # -- Configuration used by the artifact commands.
    artifact:
      # -- List of artifact types that falcoctl will handle. If a configured ref resolves to an artifact whose type is not
      # contained in the list, falcoctl will refuse to download and install that artifact.
      allowedTypes:
      - rulesfile
      install:
        # -- Do not resolve the dependencies for artifacts. By default this is true, but for our use case we disable it.
        resolveDeps: false
        # -- List of artifacts to be installed by the falcoctl init container.
        refs: [k8saudit-rules:0.5]
        # -- Directory where the *rulesfiles* are saved. The path is relative to the container, which in this case is an emptyDir
        # mounted also by the Falco pod.
        rulesfilesDir: /rulesfiles
        # -- Same as the one above but for the plugin artifacts.
        pluginsDir: /plugins
      follow:
        # -- List of artifacts to be followed by the falcoctl sidecar container.
        refs: [k8saudit-rules:0.5]
        # -- Directory where the *rulesfiles* are saved. The path is relative to the container, which in this case is an emptyDir
        # mounted also by the Falco pod.
        rulesfilesDir: /rulesfiles
        # -- Same as the one above but for the plugin artifacts.
        pluginsDir: /plugins
EOF
```

- Set `falcoctl.artifact.install.enabled=true` to install _rulesfiles_ of the loaded plugins. Configure **falcoctl** to install the _rulesfiles_ of the plugins you are loading with Falco. For example, if you are loading **k8saudit** plugin then you need to set `falcoctl.config.artifact.install.refs=[k8saudit-rules:0.5]`. When Falco is deployed the **falcoctl** init container will download the specified artifacts based on their tag.
- Set `falcoctl.artifact.follow.enabled=true` to keep updated _rulesfiles_ of the loaded plugins.
- Proceed to upgrade your Falco release by running:
```bash=
helm upgrade falco falcosecurity/falco \
--namespace=falco \
--reuse-values \
--values=./falcoctl-values.yaml
```

5. Falco with **multiple sources** enabled (syscalls + plugins):
1. Upgrading Falco to the new version:
```bash=
helm upgrade falco falcosecurity/falco \
--namespace=falco \
--reuse-values \
--set falcoctl.artifact.install.enabled=false \
--set falcoctl.artifact.follow.enabled=false
```
2. Upgrading Falco and leveraging **falcoctl** for rules and plugins. Refer to point 4. for **falcoctl** configuration.

### Rulesfiles

Starting from `v3.0.0`, the chart drops the bundled **rulesfiles**. Previous versions used them to create a configmap containing the following **rulesfiles**:

- application_rules.yaml
- aws_cloudtrail_rules.yaml
- falco_rules.local.yaml
- falco_rules.yaml
- k8s_audit_rules.yaml

The reason why we are dropping them is simple: the files are already shipped within the Falco image and do not provide any benefit. On the other hand, we had to manually update those files for each Falco release.

For users out there, do not worry, we have you covered. As said before the **rulesfiles** are already shipped inside
the Falco image. Still, this solution has some drawbacks such as users having to wait for the next releases of Falco
to get the latest version of those **rulesfiles**. Or they could manually update them by using the [custom rules](.
/README.md#loading-custom-rules).


medium

This file has several formatting issues that affect readability and correctness:

  1. Incorrect code block syntax: Several code blocks are defined with an extra = (e.g., ```bash=). This should be corrected to ```bash for proper rendering. This occurs on lines 115, 143, 153, 165, 176, 257, and 266.
  2. Broken link: The link to custom rules on lines 289-290 is broken by a newline character. It should be on a single line to render correctly.
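This class of error is easy to catch mechanically with a regex scan over the docs. A small sketch (the sample string is illustrative, not the actual file content):

```python
import re

# Sample document reproducing the malformed fence reported above.
doc = "```bash=\nhelm upgrade falco falcosecurity/falco\n```\n"

# A valid opening fence is ``` plus an optional language hint; a trailing
# '=' (a HackMD/CodiMD habit) breaks rendering on GitHub.
bad_fences = re.findall(r"^```[A-Za-z]+=$", doc, flags=re.MULTILINE)
print(bad_fences)
```

Running the same scan over the chart's markdown files would list every fence that needs the trailing `=` removed.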

Comment on lines +156 to +549
* removed falco-expoter (now deprecated) references from the readme

## v4.21.0

* feat(falco): adding imagePullSecrets at the service account level

## v4.20.1

* correctly mount the volumes based on socket path
* unit tests for container engines socket paths

## v4.20.0

* bump falcoctl to 0.11.0

## v4.19.0

* fix falco version to 0.40.0

## v4.18.0

* update the chart for falco 0.40;
* remove deprecated cli flag `--cri` and use instead the configuration file. More info here: https://github.com/falcosecurity/falco/pull/3329
* use new falco images, for more info see: https://github.com/falcosecurity/falco/issues/3165

## v4.17.2

* update(falco): add ports definition in falco container spec

## v4.17.1

* docs(falco): update README.md to reflect latest driver configuration and correct broken links

## v4.17.0

* update(falco): bump k8saudit version to 0.11

## v4.16.2

* fix(falco): set dnsPolicy to ClusterFirstWithHostNet when gvisor driver is enabled to prevent DNS lookup failures for cluster-internal services

## v4.16.1

* fix(falco/serviceMonitor): set service label selector
* new(falco/tests): add unit tests for serviceMonitor label selector

## v4.16.0

* bump falcosidekick dependency to v0.9.* to match with future versions

## v4.15.1

* fix: change the url for the concurrent queue classes docs

## v4.15.0

* update(falco): bump falco version to 0.39.2 and falcoctl to 0.10.1

## v4.14.2

* fix(falco/readme): use `rules_files` instead of deprecated `rules_file` in README config snippet

## v4.14.1

* fix(falco/dashboard): make pod variable independent of triggered rules. CPU and memory are now visible for each
pod, even when no rules have been triggered for that falco instance.

## v4.14.0

* Bump k8smeta plugin to 0.2.1, see: https://github.com/falcosecurity/plugins/releases/tag/plugins%2Fk8smeta%2Fv0.2.1

## v4.13.0

* Expose new config entries for k8smeta plugin:`verbosity` and `hostProc`.

## v4.12.0

* Set apparmor to `unconfined` (disabled) when `leastPrivileged: true` and (`kind: modern_ebpf` or `kind: ebpf`)

## v4.11.2

* only prints env key if there are env values to be passed on `falcoctl.initContainer` and `falcoctl.sidecar`

## v4.11.1

* add details for the scap drops buffer charts with the dir and drops labels

## v4.11.0

* new(falco): add grafana dashboard for falco

## v4.10.0

* Bump Falco to v0.39.1

## v4.9.1

* feat(falco): add labels and annotations to the metrics service

## v4.9.0

* Bump Falco to v0.39.0
* update(falco): add new configuration entries for Falco
This commit adds new config keys introduced in Falco 0.39.0.
Furthermore, updates the unit tests for the latest changes
in the values.yaml.
* cleanup(falco): remove deprecated falco configuration
This commit removes the "output" config key that has
been deprecated in falco.
* update(falco): mount proc filesystem for plugins
The following PR in libs https://github.com/falcosecurity/libs/pull/1969
introduces a new platform for plugins that requires access to the
proc filesystem.
* fix(falco): update broken link pointing to Falco docs
After the changes made by the following PR to the Falco docs https://github.com/falcosecurity/falco-website/pull/1362
this commit updates a broken link.

## v4.8.3

* The init container, when driver.kind=auto, automatically generates
a new Falco configuration file and selects the appropriate engine
kind based on the environment where Falco is deployed.

With this commit, along with falcoctl PR #630, the Helm charts now
support different driver kinds for Falco instances based on the
specific node they are running on. When driver.kind=auto is set,
each Falco instance dynamically selects the most suitable
driver (e.g., ebpf, kmod, modern_ebpf) for the node.
+-------------------------------------------------------+
|                  Kubernetes Cluster                   |
|                                                       |
|  +-------------------+        +-------------------+   |
|  |      Node 1       |        |      Node 2       |   |
|  |                   |        |                   |   |
|  |   Falco (ebpf)    |        |   Falco (kmod)    |   |
|  +-------------------+        +-------------------+   |
|                                                       |
|  +---------------------+                              |
|  |       Node 3        |                              |
|  |                     |                              |
|  | Falco (modern_ebpf) |                              |
|  +---------------------+                              |
+-------------------------------------------------------+

## v4.8.2

* fix(falco): correctly mount host filesystems when driver.kind is auto

When falco runs with kmod/module driver it needs special filesystems
to be mounted from the host such /dev and /sys/module/falco.
This commit ensures that we mount them in the falco container.

Note that, the /sys/module/falco is now mounted as /sys/module since
we do not know which kind of driver will be used. The falco folder
exists under /sys/module only when the kernel module is loaded,
hence it's not possible to use the /sys/module/falco hostpath when driver.kind
is set to auto.

## v4.8.1

* fix(falcosidekick): add support for custom service type for webui redis

## v4.8.0

* Upgrade Falco version to 0.38.2

## v4.7.2

* use rules_files key in the preset values files

## v4.7.1

* fix(falco/config): use rules_files instead of deprecated key rules_file

## v4.7.0

* bump k8smeta plugin to version 0.2.0. The new version, resolves a bug that prevented the plugin
from populating the k8smeta fields. For more info see:
* https://github.com/falcosecurity/plugins/issues/514
* https://github.com/falcosecurity/plugins/pull/517

## v4.6.3

* fix(falco): mount client-certs-volume only if certs.existingClientSecret is defined

## v4.6.2

* bump falcosidekick dependency to v0.8.* to match with future versions

## v4.6.1

* bump falcosidekick dependency to v0.8.2 (fixes bug when using externalRedis in UI)

## v4.6.0

* feat(falco): add support for Falco metrics

## v4.5.2

* bump falcosidekick dependency version to v0.8.0, for falcosidekick 2.29.0

## v4.5.2

* reording scc configuration, making it more robust to plain yaml comparison

## v4.5.1

* falco is now able to reconnect to containerd.socket

## v4.5.0

* bump Falco version to 0.38.1

## v4.4.3

* Added a `labels` field in the controller to provide extra labeling for the daemonset/deployment

## v4.4.2

* fix wrong check in pod template where `existingSecret` was used instead of `existingClientSecret`

## v4.4.1

* bump k8s-metacollector dependency version to v0.1.1. See: https://github.com/falcosecurity/k8s-metacollector/releases

## v4.3.1

* bump falcosidekick dependency version to v0.7.19 install latest version through falco chart

## v4.3.0

* `FALCO_HOSTNAME` and `HOST_ROOT` are now set by default in pods configuration.

## v4.2.6

* bump falcosidekick dependency version to v0.7.17 install latest version through falco chart

## v4.2.5

* fix docs

## v4.2.4

* bump falcosidekick dependency version to v0.7.15 install latest version through falco chart

## v4.2.3

* fix(falco/helpers): adjust formatting to be compatible with older helm versions

## v4.2.2

* fix(falco/README): dead link

## v4.2.1
* fix(falco/README): typos, formatting and broken links

## v4.2.0

* Bump falco to v0.37.1 and falcoctl to v0.7.2

## v4.1.2
* Fix links in output after falco install without sidekick

## v4.1.1

* Update README.md.

## v4.1.0

* Reintroduce the service account.

## v4.0.0
The new chart introduces some breaking changes. For folks upgrading Falco please see the BREAKING-CHANGES.md file.

* Uniform driver names and configuration to the Falco one: https://github.com/falcosecurity/falco/pull/2413;
* Fix usernames and groupnames resolution by mounting the `/etc` filesystem;
* Drop old kubernetes collector related resources;
* Introduce the new k8s-metacollector and k8smeta plugin (experimental);
* Enable the dependency resolver for artifacts in falcoctl since the Falco image does not ship anymore the plugins;
* Bump Falco to 0.37.0;
* Bump falcoctl to 0.7.0.

## v3.8.7

* Upgrade falcosidekick chart to `v0.7.11`.

## v3.8.6

* no changes to the chart itself. Updated README.md and makefile.

## v3.8.5

* Add mTLS cryptographic material load via Helm for Falco

## v3.8.4

* Upgrade Falco to 0.36.2: https://github.com/falcosecurity/falco/releases/tag/0.36.2

## v3.8.3

* Upgrade falcosidekick chart to `v0.7.7`.

## v3.8.2

* Upgrade falcosidekick chart to `v0.7.6`.

## v3.8.1

* noop change just to test the ci

## v3.8.0

* Upgrade Falco to 0.36.1: https://github.com/falcosecurity/falco/releases/tag/0.36.1
* Sync values.yaml with 0.36.1 falco.yaml config file.

## v3.7.1

* Update readme

## v3.7.0

* Upgrade Falco to 0.36. https://github.com/falcosecurity/falco/releases/tag/0.36.0
* Sync values.yaml with upstream falco.yaml config file.
* Upgrade falcoctl to 0.6.2. For more info see the release notes: https://github.com/falcosecurity/falcoctl/releases/tag/v0.6.2

## v3.6.2

* Cleanup wrong files

## v3.6.1

* Upgrade falcosidekick chart to `v0.7.1`.

## v3.6.0

* Add `outputs` field to falco configuration

## v3.5.0

## Major Changes

* Support configuration of revisionHistoryLimit of the deployment

## v3.4.1

* Upgrade falcosidekick chart to `v0.6.3`.

## v3.4.0

* Introduce an ability to use an additional volumeMounts for `falcoctl-artifact-install` and `falcoctl-artifact-follow` containers.

## v3.3.1

* No changes made to the falco chart, only some fixes in the makefile

## v3.3.0
* Upgrade Falco to 0.35.1. For more info see the release notes: https://github.com/falcosecurity/falco/releases/tag/0.35.1
* Upgrade falcoctl to 0.5.1. For more info see the release notes: https://github.com/falcosecurity/falcoctl/releases/tag/v0.5.1
* Introduce least privileged mode in modern ebpf. For more info see: https://falco.org/docs/setup/container/#docker-least-privileged-modern-ebpf

## v3.2.1
* Set falco.http_output.url to empty string in values.yaml file

## v3.2.0
* Upgrade Falco to 0.35.0. For more info see the release notes: https://github.com/falcosecurity/falco/releases/tag/0.35.0
* Sync values.yaml with upstream falco.yaml config file.
* Upgrade falcoctl to 0.5.0. For more info see the release notes: https://github.com/falcosecurity/falcoctl/releases/tag/v0.5.0
* The tag used to install and follow the falco rules is `1`
* The tag used to install and follow the k8saudit rules is `0.6`

## v3.1.5

* Use list as default for env parameter of init and follow containers

## v3.1.4

* Fix typo in values-k8audit file

## v3.1.3

* Updates the grpc-service to use the correct label selector

## v3.1.2

* Bump `falcosidekick` dependency to 0.6.1

## v3.1.1
* Update `k8saudit` section in README.md file.

## v3.1.0
* Upgrade Falco to 0.34.1

## v3.0.0
* Drop support for falcosecuriy/falco image, only the init container approach is supported out of the box;


medium

This changelog has a few minor typos and formatting issues that should be addressed for clarity:

  • Typo (line 156): falco-expoter should be falco-exporter.
  • Unnecessary line break (lines 220-221): The description for the dashboard fix is split across two lines.
  • Duplicated section (lines 357, 359): The version v4.5.2 is listed twice.
  • Typo (line 549): falcosecuriy should be falcosecurity.
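Duplicated version headings like the `v4.5.2` case are easy to flag automatically. A minimal sketch (the changelog excerpt below is illustrative):

```python
from collections import Counter

# Illustrative excerpt: two entries share the v4.5.2 heading.
changelog = """\
## v4.5.2
* bump falcosidekick dependency version to v0.8.0
## v4.5.2
* reordering scc configuration
## v4.5.1
* falco is now able to reconnect to containerd.socket
"""

headings = [line for line in changelog.splitlines() if line.startswith("## v")]
duplicates = [v for v, n in Counter(headings).items() if n > 1]
print(duplicates)
```

A check like this could run in CI so repeated headings are caught before merge.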

Comment on lines +174 to +430
To run Falco with the [eBPF probe](http://falco.org/docs/concepts/event-sources/kernel/#legacy-ebpf-probe) you just need to set `driver.kind=ebpf` as shown in the following snippet:

```bash
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set driver.kind=ebpf
```

There are other configurations related to the eBPF probe, for more info please check the [values.yaml](./values.yaml) file. After you have made your changes to the configuration file you just need to run:

```bash
helm install falco falcosecurity/falco \
--create-namespace \
--namespace "your-custom-name-space" \
-f "path-to-custom-values.yaml-file"
```

**Modern eBPF probe**

To run Falco with the [modern eBPF probe](https://falco.org/docs/concepts/event-sources/kernel/#modern-ebpf-probe) you just need to set `driver.kind=modern_ebpf` as shown in the following snippet:

```bash
helm install falco falcosecurity/falco \
--create-namespace \
--namespace falco \
--set driver.kind=modern_ebpf
```

#### Deployment
In the scenario where Falco is used with **plugins** as data sources, the best option is to deploy it as a k8s `deployment`. **Plugins** can be of two types: those that follow the **push model** and those that follow the **pull model**. A plugin that adopts the first model expects to receive data from a remote source at a given endpoint: it just exposes an endpoint and waits for data to be posted. For example, [Kubernetes Audit Events](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) expects the data to be sent by the *k8s api server* when configured in such a way. On the other hand, plugins that abide by the **pull model** retrieve the data from a given remote service.
The following points explain why a k8s `deployment` is suitable when deploying Falco with plugins:

* need to be reachable when ingesting logs directly from remote services;
* need only one active replica, otherwise events will be sent/received to/from different Falco instances;


## Uninstalling the Chart

To uninstall a Falco release from your Kubernetes cluster, always use helm. It will take care of removing all components deployed by the chart and cleaning up your environment. The following command will remove a release called `falco` in namespace `falco`:

```bash
helm uninstall falco --namespace falco
```

## Showing logs generated by Falco container
There are many reasons why we might have to inspect the messages emitted by the Falco container. When deployed in Kubernetes, the Falco logs can be inspected through:
```bash
kubectl logs -n falco falco-pod-name
```
where `falco-pod-name` is the name of the Falco pod running in your cluster.
The command described above will only display the logs emitted by Falco up to the moment you run the command. The `-f` flag comes in handy when we are doing live testing or debugging and want to see the Falco logs as soon as they are emitted:
```bash
kubectl logs -f -n falco falco-pod-name
```
The `-f (--follow)` flag follows the log and live-streams it to your terminal. It is really useful when you are debugging a new rule and want to make sure that the rule is triggered when some actions are performed in the system.

If we need to access logs of a previous Falco run we do that by adding the `-p (--previous)` flag:
```bash
kubectl logs -p -n falco falco-pod-name
```
A scenario when we need the `-p (--previous)` flag is when we have a restart of a Falco pod and want to check what went wrong.
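When the pod name is not known in advance, a label selector avoids a lookup; the label below is the one commonly set by the chart's templates and should be verified against your release:

```bash
kubectl logs -f -n falco -l app.kubernetes.io/name=falco --all-containers
```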

### Enabling real time logs
By default, Falco's output is buffered. When live-streaming logs, we will notice a delay between the event happening (rules triggering) and the log output.
In order to have the logs emitted without delay, set `.Values.tty=true` in the [values.yaml](./values.yaml) file.
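On an existing release, the same switch can be flipped without a reinstall; a sketch:

```bash
helm upgrade falco falcosecurity/falco \
    --namespace falco \
    --reuse-values \
    --set tty=true
```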

## K8s-metacollector
Starting from Falco `0.37` the old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) has been removed.
A new component named [k8s-metacollector](https://github.com/falcosecurity/k8s-metacollector) replaces it.
The *k8s-metacollector* is a self-contained module that can be deployed within a Kubernetes cluster to perform the task of gathering metadata
from various Kubernetes resources and subsequently transmitting this collected metadata to designated subscribers.

The Kubernetes resources for which metadata will be collected and sent to Falco:
* pods;
* namespaces;
* deployments;
* replicationcontrollers;
* replicasets;
* services;

### Plugin
Since the *k8s-metacollector* is standalone and deployed in the cluster as a deployment, Falco instances need to connect to it
in order to retrieve the `metadata`. This is where the [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin comes in.
The plugin gathers details about Kubernetes resources from the *k8s-metacollector*. It then stores this information
in tables and provides access to Falco upon request. The plugin specifically acquires data for the node where the
associated Falco instance is deployed, resulting in node-level granularity.

### Exported Fields: Old and New
The old [k8s-client](https://github.com/falcosecurity/falco/issues/2973) used to populate the
[k8s](https://falco.org/docs/reference/rules/supported-fields/#field-class-k8s) fields. The **k8s** field class is still
available in Falco, for compatibility reasons, but most of the fields will return `N/A`. The following fields are still
usable and will return meaningful data when the `container runtime collectors` are enabled:
* k8s.pod.name;
* k8s.pod.id;
* k8s.pod.label;
* k8s.pod.labels;
* k8s.pod.ip;
* k8s.pod.cni.json;
* k8s.pod.namespace.name;

The [k8smeta](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta) plugin exports a whole new
[field class](https://github.com/falcosecurity/plugins/tree/master/plugins/k8smeta#supported-fields). Note that the new
`k8smeta.*` fields are usable only when the **k8smeta** plugin is loaded in Falco.
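As an illustration only, a custom rule using the new fields might be packaged through the chart's `customRules` value as below; the specific `k8smeta` field names are assumptions to check against the plugin's supported-fields list:

```bash
# Write a values fragment carrying a sketch rule that uses k8smeta fields.
cat <<'EOF' > k8smeta-custom-rules.yaml
customRules:
  rules-k8smeta.yaml: |-
    - rule: Shell spawned in default namespace
      desc: Detect a shell started inside a pod of the default namespace
      condition: spawned_process and proc.name in (bash, sh) and k8smeta.ns.name = "default"
      output: Shell in default namespace (command=%proc.cmdline pod=%k8smeta.pod.name ns=%k8smeta.ns.name)
      priority: WARNING
EOF
```

The fragment can then be passed with `-f k8smeta-custom-rules.yaml` alongside `--set collectors.kubernetes.enabled=true`.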

### Enabling the k8s-metacollector
The following command will deploy Falco + k8s-metacollector + k8smeta:
```bash
helm install falco falcosecurity/falco \
--namespace falco \
--create-namespace \
--set collectors.kubernetes.enabled=true
```
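A quick way to verify that the collector came up after the install (the label is assumed from the subchart's conventions and may need adjusting):

```bash
kubectl get pods,svc -n falco -l app.kubernetes.io/name=k8s-metacollector
```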

## Loading custom rules

Falco ships with a nice default ruleset. It is a good starting point but sooner or later, we are going to need to add custom rules which fit our needs.

So the question is: How can we load custom rules in our Falco deployment?

We are going to create a file that contains custom rules so that we can keep it in a Git repository.

```bash
cat custom-rules.yaml
```

And the file looks like this one:

```yaml
customRules:
  rules-traefik.yaml: |-
    - macro: traefik_consider_syscalls
      condition: (evt.num < 0)

    - macro: app_traefik
      condition: container and container.image startswith "traefik"

    # Restricting listening ports to selected set

    - list: traefik_allowed_inbound_ports_tcp
      items: [443, 80, 8080]

    - rule: Unexpected inbound tcp connection traefik
      desc: Detect inbound traffic to traefik using tcp on a port outside of expected set
      condition: inbound and evt.rawres >= 0 and not fd.sport in (traefik_allowed_inbound_ports_tcp) and app_traefik
      output: Inbound network connection to traefik on unexpected port (command=%proc.cmdline pid=%proc.pid connection=%fd.name sport=%fd.sport user=%user.name %container.info image=%container.image)
      priority: NOTICE

    # Restricting spawned processes to selected set

    - list: traefik_allowed_processes
      items: ["traefik"]

    - rule: Unexpected spawned process traefik
      desc: Detect a process started in a traefik container outside of an expected set
      condition: spawned_process and not proc.name in (traefik_allowed_processes) and app_traefik
      output: Unexpected process spawned in traefik container (command=%proc.cmdline pid=%proc.pid user=%user.name %container.info image=%container.image)
      priority: NOTICE
```

The next step is to use the `custom-rules.yaml` file when installing the Falco Helm chart.

```bash
helm install falco -f custom-rules.yaml falcosecurity/falco
```
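If the release already exists, the same file can be applied with an upgrade instead; a sketch:

```bash
helm upgrade falco falcosecurity/falco \
    --namespace falco \
    --reuse-values \
    -f custom-rules.yaml
```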

And we will see in our logs something like:

```bash
Tue Jun 5 15:08:57 2018: Loading rules from file /etc/falco/rules.d/rules-traefik.yaml:
```

This means that our Falco installation has loaded the rules and is ready to help us.

## Kubernetes Audit Log

The Kubernetes Audit Log is now supported via the built-in [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin. It is entirely up to you to set up the [webhook backend](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/#webhook-backend) of the Kubernetes API server to forward the Audit Log event to the Falco listening port.
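For reference, the webhook backend is described to the API server with a kubeconfig-format file passed via its `--audit-webhook-config-file` flag. A minimal sketch follows; `NODE_IP` is a placeholder to replace, and the port matches the `NodePort` (30007) used by the example values in this section:

```bash
# Sketch of an audit webhook config for the kube-apiserver (adapt address to your cluster).
cat <<'EOF' > audit-webhook.yaml
apiVersion: v1
kind: Config
clusters:
  - name: falco
    cluster:
      # Any node IP reaches a NodePort service; 30007 is the nodePort from the example values below.
      server: http://NODE_IP:30007/k8s-audit
contexts:
  - name: default
    context:
      cluster: falco
current-context: default
EOF
```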

The following snippet shows how to deploy Falco with the [k8saudit](https://github.com/falcosecurity/plugins/tree/master/plugins/k8saudit) plugin:
```yaml
# -- Disable the drivers since we want to deploy only the k8saudit plugin.
driver:
  enabled: false

# -- Disable the collectors, no syscall events to enrich with metadata.
collectors:
  enabled: false

# -- Deploy Falco as a deployment. One instance of Falco is enough. Anyway the number of replicas is configurable.
controller:
  kind: deployment
  deployment:
    # -- Number of replicas when installing Falco using a deployment. Change it if you really know what you are doing.
    # For more info check the section on Plugins in the README.md file.
    replicas: 1

falcoctl:
  artifact:
    install:
      # -- Enable the init container. We do not recommend installing (or following) plugins for security reasons since they are executable objects.
      enabled: true
    follow:
      # -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feed such as k8saudit-rules rules.
      enabled: true
  config:
    artifact:
      install:
        # -- Resolve the dependencies for artifacts.
        resolveDeps: true
        # -- List of artifacts to be installed by the falcoctl init container.
        # Only rulesfile, the plugin will be installed as a dependency.
        refs: [k8saudit-rules:0.5]
      follow:
        # -- List of artifacts to be followed by the falcoctl sidecar container.
        refs: [k8saudit-rules:0.5]

services:
  - name: k8saudit-webhook
    type: NodePort
    ports:
      - port: 9765 # See plugin open_params
        nodePort: 30007
        protocol: TCP

falco:
  rules_files:
    - /etc/falco/k8s_audit_rules.yaml
    - /etc/falco/rules.d
  plugins:
    - name: k8saudit
      library_path: libk8saudit.so
      init_config: ""
      # maxEventBytes: 1048576
      # sslCertificate: /etc/falco/falco.pem
      open_params: "http://:9765/k8s-audit"
    - name: json
      library_path: libjson.so
      init_config: ""
  # Plugins that Falco will load. Note: the same plugins are installed by the falcoctl-artifact-install init container.
  load_plugins: [k8saudit, json]

```
Here is the explanation of the above configuration:
* disable the drivers by setting `driver.enabled=false`;
* disable the collectors by setting `collectors.enabled=false`;
* deploy Falco using a k8s *deployment* by setting `controller.kind=deployment`;
* make our Falco instance reachable by the `k8s api-server` by configuring a service for it in `services`;
* enable the `falcoctl-artifact-install` init container;
* configure `falcoctl-artifact-install` to install the required plugins;
* enable the `falcoctl-artifact-follow` sidecar container to keep the ruleset up to date;
* load the correct ruleset for our plugin in `falco.rules_files`;
* configure the plugins to be loaded, in this case `k8saudit` and `json`;
* and finally add our plugins to `load_plugins` to be loaded by Falco.

The configuration above can be found in the [values-k8saudit.yaml](./values-k8saudit.yaml) file, ready to be used.
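Once fetched locally, that file can be passed straight to the install; a sketch:

```bash
helm install falco falcosecurity/falco \
    --create-namespace \
    --namespace falco \
    -f values-k8saudit.yaml
```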

medium

This README has a few issues that could confuse users and affect documentation quality:

  1. Inconsistent driver kind (line 194): The documentation for the modern eBPF probe suggests using driver.kind=modern_bpf, but the recommended name is modern_ebpf. The example should be updated for consistency.
  2. Broken markdown links:
    • On line 276, the link for [field class] is broken due to missing parentheses.
    • On line 430, the link for [values-k8saudit.yaml] is broken due to incorrect syntax.

Correcting these will improve the documentation's clarity and accuracy.


```
helm delete falco-talon -n falco
````

medium

There's a minor formatting issue in the markdown. The code block for uninstalling Falco Talon is not rendered correctly due to an extra backtick in the closing fence.

Signed-off-by: robot <robot@example.com>
@github-actions github-actions bot force-pushed the upgrade/falco/8.0.1 branch from 659e9eb to 049a79f Compare February 28, 2026 20:07