
TMP: Knative Eventing Tutorial


This document walks through a relatively complex Knative Eventing setup for introductory purposes.

Scenario

Let's say that we would like to monitor the temperatures of the nodes in our infrastructure and alert if they overheat. Assume that nodes report their temperature periodically by invoking a gRPC method on our monitor, so events are "pushed" to the monitor rather than "pulled" by it; the push model scales better (down to zero and up to the ceiling) and is also more interesting to set up. We would also like to log the temperatures for visualization and further analysis if/when needed.

Design

Knative Eventing has three main delivery methods: (a) direct, (b) channel-subscription, and (c) broker-trigger:

  • The direct method, as its name suggests, connects event producers and consumers directly (i.e., without any intermediaries). As such, it is the simplest of all, and also the least useful except for some very simple cases.
  • The channel-subscription model introduces channels as an intermediate queuing layer; subscriptions bind subscribers to channels and also provide delivery services such as retries, back-off, and dead-letter queuing.
  • Finally, the broker-trigger model is very similar to the channel-subscription model, but triggers allow filtering events by their metadata, so that a single queue (the broker) can deliver various kinds of events, in contrast to channels, which forward all incoming events to all subscribers.
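
To make the broker-trigger model concrete, here is a minimal sketch in Go, using the CloudEvents SDK (github.com/cloudevents/sdk-go/v2), of a producer pushing two different event types into the same broker. The broker URL, event type names, and payload are assumptions for illustration; a Trigger filtering on the `type` attribute would then route each kind of event only to the consumers interested in it.

```go
package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
	// In-cluster ingress URL of a Broker named "default" in the "default"
	// namespace; both names (and the URL) are assumptions for this sketch.
	const brokerURL = "http://broker-ingress.knative-eventing.svc.cluster.local/default/default"

	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create CloudEvents client: %v", err)
	}
	ctx := cloudevents.ContextWithTarget(context.Background(), brokerURL)

	// Two different event types flow through the *same* broker. A Trigger
	// filtering on, say, type=com.example.overheat delivers only overheat
	// events to its subscriber; a channel would instead forward every event
	// to every subscriber. (Both type names are hypothetical.)
	for _, eventType := range []string{"com.example.temperature-reading", "com.example.overheat"} {
		event := cloudevents.NewEvent()
		event.SetType(eventType)
		event.SetSource("example/demo") // hypothetical source
		if err := event.SetData(cloudevents.ApplicationJSON, map[string]string{"demo": "true"}); err != nil {
			log.Fatalf("failed to set data: %v", err)
		}
		if result := c.Send(ctx, event); cloudevents.IsUndelivered(result) {
			log.Fatalf("failed to send %s: %v", eventType, result)
		}
	}
}
```

A Trigger's attribute filter matches exact values, so having one Trigger per event type, as we do below, is a common pattern.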

If we come back to our scenario, we can identify the following services:

  1. An ingress server that accepts gRPC requests from nodes in our infrastructure and generates "temperature-reading" events.
    • We could have exposed our channel/broker directly to our nodes to push temperature data into, but that would (a) require the nodes to speak the protocol of our particular channel/broker (e.g., Apache Kafka), creating an accidental dependency; (b) make authentication and rate-limiting more difficult to control; and (c) prevent us from validating the incoming events (e.g., rejecting temperature readings that are not between 0 and 100). A sketch of this validation appears after the list.
  2. A "temperature-reading" event consumer to (a) determine whether the temperature is too high, and (b) record the reading for visualization, further analysis, etc. If the temperature is too high, the consumer should raise an "overheat" event; see the second sketch after this list.
  3. An "overheat" event consumer to alert the relevant teams and/or on-site engineers.
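
To illustrate service 1, below is a hedged sketch of the event-generating half of the ingress server: a helper that validates a reading and, only if it passes, wraps it in a "temperature-reading" CloudEvent and sends it to the broker. The gRPC plumbing is omitted (assume the gRPC handler calls publishReading); the function name, broker URL, and payload shape are all assumptions.

```go
package main

import (
	"context"
	"fmt"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

// brokerURL is the same assumed Broker ingress URL as in the earlier sketch.
const brokerURL = "http://broker-ingress.knative-eventing.svc.cluster.local/default/default"

// publishReading validates a reading and, if it is plausible, forwards it to
// the broker as a "temperature-reading" event. In the real ingress server
// this would be called from the gRPC handler (omitted here).
func publishReading(ctx context.Context, c cloudevents.Client, nodeID string, celsius float64) error {
	// Validate at the edge so bad data never enters the broker.
	if celsius < 0 || celsius > 100 {
		return fmt.Errorf("implausible temperature %.1f from node %q", celsius, nodeID)
	}

	event := cloudevents.NewEvent()
	event.SetType("com.example.temperature-reading") // hypothetical type
	event.SetSource("ingress-server")                // hypothetical source
	if err := event.SetData(cloudevents.ApplicationJSON,
		map[string]interface{}{"node_id": nodeID, "celsius": celsius}); err != nil {
		return err
	}

	if result := c.Send(cloudevents.ContextWithTarget(ctx, brokerURL), event); cloudevents.IsUndelivered(result) {
		return fmt.Errorf("failed to send event: %w", result)
	}
	return nil
}

func main() {
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create CloudEvents client: %v", err)
	}
	if err := publishReading(context.Background(), c, "node-42", 71.5); err != nil {
		log.Fatal(err)
	}
}
```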
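
And here is a sketch of service 2, the "temperature-reading" consumer. In the broker-trigger model, when a Trigger's subscriber replies with a new event, the broker re-ingests the reply, so raising the "overheat" event is as simple as returning it from the handler. The threshold, event types, and payload fields are again assumptions.

```go
package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

// TemperatureReading mirrors the hypothetical payload emitted by the ingress
// server; the field names are assumptions.
type TemperatureReading struct {
	NodeID  string  `json:"node_id"`
	Celsius float64 `json:"celsius"`
}

// receive handles "temperature-reading" events delivered by a Trigger. When
// the reading is too hot, it replies with an "overheat" event, which the
// broker re-ingests and routes to whichever Triggers match its type.
func receive(ctx context.Context, event cloudevents.Event) (*cloudevents.Event, cloudevents.Result) {
	var reading TemperatureReading
	if err := event.DataAs(&reading); err != nil {
		return nil, cloudevents.NewHTTPResult(400, "failed to parse reading: %s", err)
	}

	// TODO: record the reading for visualization and further analysis.

	if reading.Celsius > 90 { // the threshold is an assumption
		overheat := cloudevents.NewEvent()
		overheat.SetType("com.example.overheat")   // hypothetical type
		overheat.SetSource("temperature-consumer") // hypothetical source
		if err := overheat.SetData(cloudevents.ApplicationJSON, reading); err != nil {
			return nil, cloudevents.NewHTTPResult(500, "failed to set data: %s", err)
		}
		return &overheat, nil
	}
	return nil, nil // nothing further to emit
}

func main() {
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create CloudEvents client: %v", err)
	}
	// Listens on :8080 by default, which matches what a Knative Service expects.
	log.Fatal(c.StartReceiver(context.Background(), receive))
}
```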

A diagram of our design is shown below; it omits extra interactions such as the temperature consumer logging readings to a database and the overheat consumer alerting engineers:

[Figure: The diagram of our design.]
