**README.md** (28 additions, 15 deletions)
# Java Logging Support for Cloud Foundry
## Summary

This is a collection of support libraries for Java applications running on Cloud Foundry that serves two main purposes: it provides (a) means to emit *structured application log messages* and (b) instrumentation for parts of your application stack to *collect request metrics*.
When we say structured, we actually mean JSON format. In that sense, it shares ideas with [logstash-logback-encoder](https://github.com/logstash/logstash-logback-encoder), but takes a simpler approach, as we want to ensure that these structured messages adhere to standardized formats. With such standardized formats in place, it becomes much easier to ingest, process, and search such messages in log analysis stacks like, e.g., [ELK](https://www.elastic.co/webinars/introduction-elk-stack).
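To make "structured" concrete: instead of a free-text line, each log record becomes a single-line JSON object with standardized field names. A minimal, library-free sketch of the idea (the field names used here are illustrative, not necessarily the exact ones the library emits):

```java
import java.time.Instant;
import java.util.LinkedHashMap;
import java.util.Map;

public class StructuredLogSketch {

    // Renders one log record as a single-line JSON object, which is the
    // shape a structured logger emits instead of a plain-text message.
    // Field names are examples only, not the library's actual schema.
    static String toJson(String level, String logger, String msg) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("written_at", Instant.now().toString());
        fields.put("level", level);
        fields.put("logger", logger);
        fields.put("msg", msg);
        StringBuilder sb = new StringBuilder("{");
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (sb.length() > 1) sb.append(",");
            sb.append("\"").append(e.getKey()).append("\":\"")
              .append(e.getValue().replace("\"", "\\\"")).append("\"");
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        System.out.println(toJson("INFO", "com.acme.Demo", "request handled"));
    }
}
```

A log analysis stack can then index each field directly instead of parsing free text with regular expressions.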
If you're interested in the specifications of these standardized formats, you may want to have a closer look at the [beats folder](./cf-java-logging-support/beats).
While [logstash-logback-encoder](https://github.com/logstash/logstash-logback-encoder) is tied to [logback](http://logback.qos.ch/), we've tried to stay implementation-neutral and have implemented the core functionality on top of [slf4j](http://www.slf4j.org/), but provide implementations for both [logback](http://logback.qos.ch/) and [log4j2](http://logging.apache.org/log4j/2.x/) (and we're open to contributions that would support other implementations).
The instrumentation part currently focuses on providing [request filters for Java Servlets](http://www.oracle.com/technetwork/java/filters-137243.html) and [client and server filters for Jersey](https://jersey.java.net/documentation/latest/filters-and-interceptors.html), but again, we're open to contributions for other APIs and frameworks.
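For the servlet case, a request filter of this kind would be wired up through ordinary `web.xml` configuration. A sketch, where the filter class name is an assumption for illustration, so verify it against the servlet module:

```xml
<!-- Hypothetical registration of the request-logging servlet filter;
     the filter-class value is an assumption, check the servlet module. -->
<filter>
  <filter-name>request-logging</filter-name>
  <filter-class>com.sap.hcp.cf.logging.servlet.filter.RequestLoggingFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>request-logging</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```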
## Features and dependencies
As you can see from the structure of this repository, we're not providing one *uber* JAR that contains everything, but provide each feature separately. We also try to stay away from wiring up too many dependencies by tagging almost all of them as *provided*, i.e. it's your duty to get all runtime dependencies resolved in your application POM file.
All in all, you should do the following:
* make up your mind which features you actually need,
* adjust your Maven dependencies accordingly,
* pick your favorite logging implementation, and
* adjust your logging configuration accordingly.
Say you want to make use of the *servlet filter* feature; then you need to add the following dependency to your POM:
This feature only depends on the servlet API, which you have included in your POM anyhow.
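The dependency snippet itself is unchanged context and not shown in this diff. As an illustration only, it would look roughly like the following, where the groupId, artifactId, and version are assumptions, so take the exact coordinates from the repository:

```xml
<!-- Illustrative coordinates only; confirm against the project's POMs. -->
<dependency>
  <groupId>com.sap.hcp.cf.logging</groupId>
  <artifactId>cf-java-logging-support-servlet</artifactId>
  <version>2.0.0</version>
  <scope>compile</scope>
</dependency>
```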
## Implementation variants and logging configurations
The *core* feature (on which all other features rely) is just using the `org.slf4j` API, but to actually get logs written, you need to pick an implementation feature. As stated above, we have two implementations:
* `cf-java-logging-support-logback`, based on [logback](http://logback.qos.ch/), and
* `cf-java-logging-support-log4j2`, based on [log4j2](http://logging.apache.org/log4j/2.x/).
Here are, more or less, the minimal configurations you'd need:
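The actual configuration files are unchanged context and not shown in this diff. As a rough sketch of what a minimal `logback.xml` might look like, where the JSON encoder class name is an assumption, so take the real one from the logback module:

```xml
<!-- Minimal sketch; the encoder class name is hypothetical,
     check the logback module for the class the library ships. -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="com.sap.hcp.cf.logback.encoder.JsonEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```

On Cloud Foundry, writing JSON to `STDOUT` like this is usually all that's needed, since the platform collects and forwards application output.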
**cf-java-logging-support-core/beats/README.md** (8 additions, 5 deletions)
# Beat Specifications for Application Logs and Request Metrics
The two subfolders contain the YAML specifications for

* application logs, and
* request (performance) metrics
The format follows that of Elastic's ["beats"](https://github.com/elastic/beats).
Although we can't use the corresponding software libraries (as we're talking about logging in Java, not in Go), having specifications written in that format is still helpful if you consider ingesting such messages into ElasticSearch, as we do. In fact, we automatically generate the Java field names that we use to store the data from those specification files.
These specifications capture which fields should ultimately show up in ElasticSearch once they've travelled through a typical ELK processing pipeline. Ideally, the entity emitting the application log or request metrics will already put things into a proper shape, but most likely the parser component (Logstash) will use rule sets to perform the necessary additions and/or transformations.
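For illustration, a beats-style YAML field specification typically looks like the following. The field names and descriptions here are examples only; the actual specifications live in the two subfolders:

```yaml
# Illustrative beats-style field specification; names are examples only.
- key: app_logs
  title: Application Logs
  description: Fields emitted with each structured application log record.
  fields:
    - name: written_at
      type: date
      description: Time at which the log record was written.
    - name: level
      type: keyword
      description: Log level of the record, e.g. INFO or ERROR.
    - name: msg
      type: text
      description: The actual log message.
```

From such a file, both the ElasticSearch mapping and the generated Java field-name constants can be derived, which keeps producer and consumer in sync.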