Optimize Kebechet runs in a deployment #873

@fridex

Description

Is your feature request related to a problem? Please describe.

It looks like we are not effectively utilizing cluster resources for Kebechet. Kebechet is run for each webhook received, which can easily flood the whole namespace, especially for active repositories. Let's have a way to limit the number of Kebechet pods run for a single repository in a deployment.

Describe the solution you'd like

One solution would be to use messaging, if Kafka provides a feature that can limit the number of specific messages (based on our last tech talk discussion that is probably not possible, CC @KPostOffice).

Another way to limit the number of Kebechet runs for a single repository once a webhook is sent to user-api is to keep a new database record in postgres (associated with the GitHub URL) that holds either null or the timestamp of when user-api last scheduled Kebechet (a sketch of this check-and-set flow follows the steps below):

  1. user-api receives a webhook
  2. user-api checks whether there is already a pending request for the given repo in postgres (the timestamp is not null) and whether the timestamp is less than the specified number of minutes old (a new configuration entry in user-api)
    a. if yes, the webhook handling is ignored (Kebechet is not run)
    b. if no, continue to step 3
  3. add the current timestamp to the database for the specified repo
  4. schedule Kebechet
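
To make this concrete, here is a minimal sketch of the check-and-set flow on the user-api side. All names (`handle_webhook`, `get_kebechet_scheduled_at`, `set_kebechet_scheduled_at`, `schedule_kebechet` and the environment variable) are hypothetical, not existing user-api or thoth-storages API:

```python
import os
from datetime import datetime, timedelta

# Hypothetical configuration: timeout (in minutes) after which a stored
# timestamp no longer counts as a pending run; the env variable name is an
# assumption, to be settled in user-api configuration.
_KEBECHET_SCHEDULE_TIMEOUT = timedelta(
    minutes=int(os.getenv("THOTH_USER_API_KEBECHET_SCHEDULE_TIMEOUT_MIN", "30"))
)


def handle_webhook(database, scheduler, repo_url: str) -> bool:
    """Schedule Kebechet for repo_url unless a recent run is already pending.

    Returns True if Kebechet was scheduled, False if the webhook was ignored.
    """
    # Step 2: check for a pending, still-valid timestamp for this repository.
    scheduled_at = database.get_kebechet_scheduled_at(repo_url)  # None or datetime
    if scheduled_at is not None and datetime.utcnow() - scheduled_at < _KEBECHET_SCHEDULE_TIMEOUT:
        # Step 2a: a run is already queued and the timestamp is still fresh.
        return False

    # Step 3: record the current timestamp for the repository.
    database.set_kebechet_scheduled_at(repo_url, datetime.utcnow())

    # Step 4: schedule Kebechet (e.g. produce the Kafka message / workflow).
    scheduler.schedule_kebechet(repo_url)
    return True
```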

On the Kebechet side: once Kebechet starts, it sets the timestamp for the repo it handles back to null and then processes the repository with the Kebechet managers.

This way we ignore any webhooks coming into the system while Kebechet messages for the affected repositories are already queued, since we know Kebechet will handle those repositories in its next run.
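
A corresponding sketch for the Kebechet (or init container) side, again with hypothetical names; the only requirement is that the timestamp is reset before the managers run:

```python
def run_kebechet(database, repo_url: str, managers) -> None:
    """Hypothetical entrypoint: reset the timestamp, then run the managers."""
    # Reset the timestamp first: any webhook arriving after this point is
    # allowed to queue the next Kebechet run for this repository.
    database.set_kebechet_scheduled_at(repo_url, None)

    # Then let the Kebechet managers process the repository as usual.
    for manager in managers:
        manager.run(repo_url)
```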

  • add a timestamp column to postgres associated with the GitHub URL (installations); a sketch of the schema and configuration pieces follows this list
  • user-api is extended with the logic described above that manipulates the timestamp
  • user-api's configuration is extended with an option (configurable via env vars) that states the timeout after which the timestamp stored in the database is considered invalid
  • user-api exposes the newly created configuration option as a metric (available in the dashboard)
  • Kebechet sets the given timestamp entry in the database to null on startup
    • this can be done in an init container using workflow helpers or similar (if we do not want thoth-storages in Kebechet itself)
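
A minimal sketch of the storage and configuration pieces, assuming an SQLAlchemy model (as in thoth-storages) and prometheus_client for the metric; the table, column, env variable and metric names are illustrative only:

```python
import os

from prometheus_client import Gauge
from sqlalchemy import Column, DateTime, Integer, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class KebechetInstallationExample(Base):
    """Illustrative model: one row per GitHub URL (installation)."""

    __tablename__ = "kebechet_installation_example"

    id = Column(Integer, primary_key=True, autoincrement=True)
    github_url = Column(Text, nullable=False, unique=True)
    # Null when no Kebechet run is pending, otherwise the time when user-api
    # last scheduled Kebechet for this repository.
    kebechet_scheduled_at = Column(DateTime, nullable=True)


# Timeout after which a stored timestamp is considered invalid; the env
# variable name is an assumption, to be settled in user-api configuration.
KEBECHET_SCHEDULE_TIMEOUT_MIN = int(
    os.getenv("THOTH_USER_API_KEBECHET_SCHEDULE_TIMEOUT_MIN", "30")
)

# Expose the configured timeout as a metric so it shows up on the dashboard.
kebechet_schedule_timeout = Gauge(
    "thoth_user_api_kebechet_schedule_timeout_minutes",
    "Timeout after which a pending Kebechet scheduling timestamp is ignored.",
)
kebechet_schedule_timeout.set(KEBECHET_SCHEDULE_TIMEOUT_MIN)
```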

Describe alternatives you've considered

Keep the solution as is; however, that is not optimal with respect to the resources allocated.

Additional context

The timestamp was chosen so that we can avoid adjusting the database manually if there are issues (e.g. issues with Kafka). If we lose messages or Kebechet fails to reset the database entry to null, we will still be able to handle requests after the specified time configured in user-api.

Metadata

Assignees

No one assigned

    Labels

    kind/feature: Categorizes issue or PR as related to a new feature.
    priority/backlog: Higher priority than priority/awaiting-more-evidence.
    sig/devsecops: Categorizes an issue or PR as relevant to SIG DevSecOps.
    triage/needs-information: Indicates an issue needs more information in order to work on it.

    Projects

    Status

    🆕 New

    Milestone

    No milestone
