Demonstrate zap logger wrapped core concept #6964
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

@@           Coverage Diff           @@
##             main   #6964     +/-  ##
=======================================
- Coverage    75.6%   75.5%    -0.1%
=======================================
  Files         207     207
  Lines       19354   19407     +53
=======================================
+ Hits        14643   14665     +22
- Misses       4275    4302     +27
- Partials      436     440      +4
Here are existing alternatives:
The Custom zapcore.Core solution is essentially the same thing I propose here. It more closely mirrors the concept of an appender that the otel log API was designed to support. Why not provide a built-in utility so this pattern is easily accessible?
Based on the feedback we got so far, users are not interested in it and prefer the current design, which offers more diverse setups.

PS. Notice that even the issue author decided not to use the custom zapcore.Core solution.
Well, the user was given the option between configuring code or providing their own core implementation, so of course they chose the config option. The point of this rearrangement is that it shifts the config story to zap and away from the OTel SDK. Go's requirements around multiple independent processor pipelines, filtering / sampling, and processor-level severity configuration are a function of Go log appenders being designed to act as full implementations of the log API. As discussed, the otel log API / SDK were not designed to replace existing log frameworks as a feature-rich logging library.
It does not shift the config to zap but to one of the zapcore.Core implementations. Users do not have to rely on other zapcore.Core implementations. Users may choose to emit logs only via OTLP. This is a more performant way of emitting logs.
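For illustration, a minimal sketch of such an OTLP-only zap setup, assuming the contrib otelzap bridge and the otlploggrpc exporter; the instrumentation name "my/app" is a placeholder:

```go
package main

import (
	"context"

	"go.opentelemetry.io/contrib/bridges/otelzap"
	"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc"
	sdklog "go.opentelemetry.io/otel/sdk/log"
	"go.uber.org/zap"
)

func main() {
	ctx := context.Background()

	// Exporter sending log records over OTLP/gRPC; the endpoint comes from
	// the standard OTEL_EXPORTER_OTLP_* environment variables.
	exporter, err := otlploggrpc.New(ctx)
	if err != nil {
		panic(err)
	}

	// Batch records before export.
	provider := sdklog.NewLoggerProvider(
		sdklog.WithProcessor(sdklog.NewBatchProcessor(exporter)),
	)
	defer func() { _ = provider.Shutdown(ctx) }()

	// The otelzap core is the only core, so logs are emitted exclusively via OTLP.
	logger := zap.New(otelzap.NewCore("my/app", otelzap.WithLoggerProvider(provider)))
	logger.Info("emitted only via OTLP")
}
```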
Yes, and most of the time that will be the zapcore.Core implementation that is published from the same repo as the zap API.
People won't want to do this. Sending logs to a network location will (almost) always be supplementary to logging to stdout / the console. You're always going to need an escape hatch to see what's happening if the network exporter is failing.
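A rough sketch of that supplementary setup, using zap's own Tee to fan entries out to a console core and the OTel bridge core; this reuses the otelzap bridge and a provider as in the sketch above (here log is the go.opentelemetry.io/otel/log package; imports elided):

```go
// newTeeLogger keeps stdout as the primary destination and mirrors every
// entry to the OTel SDK via the otelzap bridge core.
func newTeeLogger(provider log.LoggerProvider) *zap.Logger {
	consoleCore := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.Lock(os.Stdout),
		zapcore.InfoLevel,
	)
	otelCore := otelzap.NewCore("my/app", otelzap.WithLoggerProvider(provider))

	// Tee duplicates each entry to both cores, so stdout remains the escape
	// hatch if the OTLP exporter is failing.
	return zap.New(zapcore.NewTee(consoleCore, otelCore))
}
```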
It also follows the OTel Logging Specification:
We have received no feedback from users that supports your statement. So far we have received only positive feedback about the Go Logs API, SDK, and bridges design (e.g. in person during KubeCon EU 2024 Paris). Since Aug 23, 2024 (the initial release of

PS. People can always use the STDOUT Exporter in case they have problems, e.g. with the network.
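For reference, a minimal sketch of wiring up the STDOUT Exporter mentioned here, assuming the go.opentelemetry.io/otel/exporters/stdout/stdoutlog package (imports elided):

```go
// Emit OTel log records to stdout, e.g. as a fallback alongside a network exporter.
exporter, err := stdoutlog.New(stdoutlog.WithPrettyPrint())
if err != nil {
	panic(err)
}
provider := sdklog.NewLoggerProvider(
	sdklog.WithProcessor(sdklog.NewSimpleProcessor(exporter)),
)
```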
The design of the OpenTelemetry API and SDK, built on top of years of iteration and feedback, supports my statement. The whole design is built on the assumption that there is a set of popular log APIs with implementations that have rich feature sets which would be difficult / wasteful to recreate or compete with.

What about all the users that wanted multiple independent logging pipelines, with the filtering and severity configuration options? These types of requests reflect a user base that wants to: (1) continue using an existing log API and configuration to route logs to stdout, and (2) bolt on the additional capability to export logs to an OTLP network location. You don't ask for multiple independent pipelines if you only care about exporting to an OTLP network location.
So only exporting logs to a network location? No local logging to stdout or filesystem? If so, this is a bad idea and not a pattern we should promote. Imagine being the person responsible for maintaining such an application when an inevitable network issue occurs! 😬
Yes, but with a very opinionated encoding and basically no config options. What if they prefer a different encoding (e.g. not the verbose protobuf JSON, or a different timestamp format)? Or want to log to files with periodic rotation? Or want to export to a different network location, like Kafka or a database? Mature log frameworks already have strong configuration stories for all of these features. Why reinvent the wheel by re-implementing them in OpenTelemetry?
The assumption seems wrong for Go. You are actually getting feedback in opposition to your statement.
Go logging libraries that are considered mature, such as slog and zap, do not have "strong configuration stories". More: open-telemetry/opentelemetry-specification#3917 (comment)

I think it is important to share that the slog bridge was even reviewed by the author of slog: #5138 (comment)
I think this is an orthogonal problem that should still have a solution in the SDK, and relying on stdout may also not be an appropriate solution. Here is an example of how it could be solved at the SDK level: open-telemetry/opentelemetry-specification#3645. Currently, users can also choose to use an OTLP file exporter.
But I don't see a technical reason for Go's divergence. I was initially under the impression that something about the Go language design or log ecosystem necessitated re-implementing the entire log API, but with this draft PR, I now see that it's a design decision. So I ask: why make a design decision that diverges from the intended design of the OpenTelemetry log API / SDK? Sure, you've met user requirements by defining new concepts like the filtering processor and LogRecordProcessor#enabled, but these aren't necessary if Go log appenders don't implement the entire log API.

The reason this all matters is consistency across languages. OpenTelemetry is a better project if logs feel familiar regardless of which language implementation you're using. Imagine this through the lens of declarative config. We want to get to a place where you can take the same SDK config and plug it into all your apps written in different languages. This is a strong user story. Log SDK config looks about the same for almost all languages, with a batch processor configured to export to an OTLP network location. But for Go, the config needs to look quite different, since if you plug this simple config into Go, you'll only be logging to a network location, which is almost always the wrong thing.
It's not a problem at all if a log appender is not responsible for being a full implementation of a log API.
You were under the correct impression.
The fact that something is possible does not mean that it is idiomatic in Go (or idiomatic usage of the Go logging libraries). Similarly for Rust's

Notice also that with recent changes in the OTel Specification it is allowed to use the Logs API directly (without using any log appender/bridge). This was needed e.g. to allow instrumentation libraries to emit log records (or event records) without requiring a dependency on any concrete logging library. With this, it is necessary to add fundamental logging features to the Logs SDK. Languages where adding a
This is not true. For instance, filtering processors can be used e.g. to set different log levels for different exporters (we call it a "minimum severity processor") or to add a "trace-based sampling" filtering processor.
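A sketch of the minimum-severity idea described above, assuming the contrib minsev processor (go.opentelemetry.io/contrib/processors/minsev); stdoutExporter and otlpExporter are placeholders constructed elsewhere:

```go
// Two independent pipelines with different minimum severities: stdout
// receives Debug and above, while OTLP receives only Warn and above.
provider := sdklog.NewLoggerProvider(
	sdklog.WithProcessor(minsev.NewLogProcessor(
		sdklog.NewSimpleProcessor(stdoutExporter), minsev.SeverityDebug,
	)),
	sdklog.WithProcessor(minsev.NewLogProcessor(
		sdklog.NewBatchProcessor(otlpExporter), minsev.SeverityWarn,
	)),
)
```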
Just as it was possible to wrap a zapcore.Core, it appears possible to wrap a slog.Handler.
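To make the parallel concrete, a rough sketch of wrapping a slog.Handler the same way: a hypothetical teeHandler (the type and its name are made up for illustration) that delegates to the user's existing handler and mirrors each record to a second one, e.g. the contrib otelslog bridge:

```go
import (
	"context"
	"log/slog"
)

// teeHandler delegates to an existing handler and mirrors every record
// to a second handler (e.g. the otelslog bridge).
type teeHandler struct {
	primary, secondary slog.Handler
}

func (h teeHandler) Enabled(ctx context.Context, l slog.Level) bool {
	return h.primary.Enabled(ctx, l) || h.secondary.Enabled(ctx, l)
}

func (h teeHandler) Handle(ctx context.Context, r slog.Record) error {
	if h.primary.Enabled(ctx, r.Level) {
		if err := h.primary.Handle(ctx, r.Clone()); err != nil {
			return err
		}
	}
	if h.secondary.Enabled(ctx, r.Level) {
		return h.secondary.Handle(ctx, r.Clone())
	}
	return nil
}

func (h teeHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return teeHandler{h.primary.WithAttrs(attrs), h.secondary.WithAttrs(attrs)}
}

func (h teeHandler) WithGroup(name string) slog.Handler {
	return teeHandler{h.primary.WithGroup(name), h.secondary.WithGroup(name)}
}
```

It could then be installed with something like slog.New(teeHandler{slog.NewJSONHandler(os.Stdout, nil), otelslog.NewHandler("my/app")}), using otelslog from go.opentelemetry.io/contrib/bridges/otelslog.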
You obviously have the authority on what constitutes idiomatic Go, but I've read the docs and am struggling to find guidance that expresses a preference for implementing the entire backend (zapcore.Core, slog.Handler, etc.) vs. wrapping.

In Java, we have a fractured log ecosystem, with major log APIs including JUL (java.util.logging), log4j, SLF4J, and logback. All of these log APIs want to have a good interop story with alternative log APIs, so they publish bridges which implement competing APIs, purely at the API level. The idea is that a user selects a log API and backend for their app (say logback), and installs bridges such that logs from JUL, log4j, and SLF4J all get routed to the logback API. Then they configure the logback implementation (i.e. the backend, in Go vocabulary) to control how the logs from all the APIs are handled.

This is analogous to the otel Go log story. The otel Go log appenders bridge slog, zap, etc. to otel at the API level. Users install these bridges, and configure the otel log SDK (i.e. the backend) to control how the logs from all the APIs are handled. We could have taken that direction in otel Java, and bridged JUL, log4j, SLF4J, and logback to the otel log API at the API level. There is lots of prior art in Java for bridging log APIs at the API level, such that we would have been able to make a strong case that doing so is idiomatic in Java.

But again, the otel log API / SDK were designed for bridging at the backend level, not at the API level. The idea was always to allow users to bolt on OTLP logs to whatever log tools they were already using. And again, the benefit of bridging at the backend level rather than the API level is that the otel log SDK doesn't have to be fully featured.
What's not true? The claim I make is: "you've met user requirements by defining new concepts like filtering processor and LogRecordProcessor#enabled, but these aren't necessary if go log appenders don't implement the entire log API". It is true that we don't need SDK features like the filtering processor or LogRecordProcessor#enabled if the otel log SDK isn't trying to be a fully featured log backend. Those are the types of features that normally come from the log backend. If bridging at the backend level, all an otel log SDK really needs to do is buffer logs and send them to an OTLP network location.

And we would most likely design them differently if the otel log SDK didn't need to be a fully featured log backend. Specifically, we would likely have filtering by logger and severity at the LoggerProvider level instead of at the LogRecordProcessor level. Configuration at the LogRecordProcessor level is less ergonomic than at the LoggerProvider level, and only makes sense if you are prioritizing multiple independent processing pipelines. Which, again, is not a priority if we're not trying to build a fully featured log backend.
From my previous comment:
Thanks for your feedback 🙏
@pellared and I were having a discussion recently which highlighted some differences between log appender implementations in Go and other languages like Java, .NET, etc.
The comment I linked to spells out the differences, but at a high level, go log appenders implement the entire logger API, where a typical java log appender implementation is plugged into the built-in logger implementation as an extension that is only responsible for mapping a log to the otel log API.
The otel log API wasn't designed to support bridges which act as a full implementation of a log API. This is not to say that it cannot be evolved to meet the requirements, but I want to explore if there is a simpler option. Can go log appenders be written to more closely mirror the appender model in other languages like java?
This draft PR sketches out that idea for zap:
- A new zapcore.Core called WrappedCore, which requires the following parameters:
  - delegate zapcore.Core - a delegate core instance with the user's existing log config
  - provider log.LoggerProvider - an otel log provider, with the user's otel log configuration. Typically, this will be a batch log processor paired with an OTLP exporter.
- WrappedCore delegates to the delegate for all methods. However, when Check is called, it registers a CheckWriteHook, which is invoked after the log entry is written. The hook is responsible for bridging the log to the otel logger provider.

Note: I'm not a Go developer and this is only meant to sketch out an idea. If the Go SIG likes this direction, someone would have to pick it up and run with it.
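For readers skimming the thread, a rough Go rendering of the design described above; this is illustrative only, not the PR's actual code, and the severity/field mapping is elided. It assumes zap's CheckWriteHook / CheckedEntry.After API and the go.opentelemetry.io/otel/log package:

```go
import (
	"context"

	"go.opentelemetry.io/otel/log"
	"go.uber.org/zap/zapcore"
)

// WrappedCore wraps an existing zapcore.Core and bridges every written
// entry to an OTel log.LoggerProvider, mirroring the appender model.
type WrappedCore struct {
	delegate zapcore.Core
	logger   log.Logger
}

func NewWrappedCore(delegate zapcore.Core, provider log.LoggerProvider) *WrappedCore {
	return &WrappedCore{delegate: delegate, logger: provider.Logger("zap-wrapped-core")}
}

// All Core methods simply delegate.
func (c *WrappedCore) Enabled(l zapcore.Level) bool { return c.delegate.Enabled(l) }
func (c *WrappedCore) With(fields []zapcore.Field) zapcore.Core {
	return &WrappedCore{delegate: c.delegate.With(fields), logger: c.logger}
}
func (c *WrappedCore) Write(ent zapcore.Entry, fields []zapcore.Field) error {
	return c.delegate.Write(ent, fields)
}
func (c *WrappedCore) Sync() error { return c.delegate.Sync() }

// Check delegates the level decision, then registers a CheckWriteHook so the
// bridge runs after the delegate has written the entry.
func (c *WrappedCore) Check(ent zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {
	ce = c.delegate.Check(ent, ce)
	if ce != nil {
		ce = ce.After(ent, bridgeHook{logger: c.logger})
	}
	return ce
}

// bridgeHook converts the written entry into an OTel log record.
type bridgeHook struct{ logger log.Logger }

func (h bridgeHook) OnWrite(ce *zapcore.CheckedEntry, fields []zapcore.Field) {
	var rec log.Record
	rec.SetTimestamp(ce.Time)
	rec.SetBody(log.StringValue(ce.Message))
	// Severity and zap-field-to-attribute mapping elided for brevity.
	h.logger.Emit(context.Background(), rec)
}
```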