* Configured Elasticsearch to work with SSL
* Disable Xpack on Kibana and Ingestor nodes
* Implement SSL OPS file
* Unlink elasticsearch_config job from remote ES cluster and run it against colocated one
* Unbound upload-kibana-objects from ES remote cluster
* Fix scale-to-one-az ops file
* Unbound curator from remote ES cluster and make it use colocated one
* Move ls-router to separate OPS file
* Disable post-start across all instances
* Change dn
* Disable post-start on Kibana also
* Put admin cert to data node
* Re-organize post-start
* Add README
* Split ssl/tls
* Upload blobs
* Fixup upon review
README.md: 19 additions & 31 deletions
@@ -1,32 +1,35 @@
# Logsearch

-A scalable stack of [Elasticsearch](http://www.elasticsearch.org/overview/elasticsearch/),
-[Logstash](http://www.elasticsearch.org/overview/logstash/), and
-[Kibana](http://www.elasticsearch.org/overview/kibana/) for your
-own [BOSH](http://docs.cloudfoundry.org/bosh/)-managed infrastructure.
+A scalable stack of [Elasticsearch](https://www.elastic.co/elasticsearch), [Logstash](https://www.elastic.co/logstash), and [Kibana](https://www.elastic.co/kibana) for your own [BOSH](https://bosh.io/docs)-managed infrastructure.
+
+
## BREAKING CHANGES
-Logsearch < v23.0.0 was based on Elasticsearch 1.x and Kibana 3.
+### Logsearch v211 is based on Elastic stack version 7
+In v211.1.0 basic cluster security features were implemented using the [Security](https://opendistro.github.io/for-elasticsearch-docs/docs/install/plugins/) plugin from the Open Distro for Elasticsearch distribution. To better support these features, the following changes were made:

-Logsearch > v200 is based on Elasticsearch 2.x and Kibana 4.
+- An additional Elasticsearch job has been colocated on the **Maintenance** instance. This allows secure communication over localhost for all the singletons also colocated there (all singletons have been unlinked from any remote Elasticsearch cluster and bound to the local one).
+- Since the ls-router instance is not mandatory, it has been moved to a separate [ops-file](deployment/operations/enable-router.yml).
+- Secure Elasticsearch node-to-node communication is implemented using the [enable-tls](deployment/operations/enable-tls.yml) ops-file.
+- Secure log ingestion is implemented using the [enable-ssl](deployment/operations/enable-ssl.yml) ops-file.
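As a rough sketch (not part of this change), these optional ops-files could be combined in a single deploy; paths are shown relative to the repo root, and any variables the ops-files require would still need to be supplied:

```
# hypothetical example only: enable the router, node-to-node TLS and ingestion SSL together
$ bosh -d logsearch deploy deployment/logsearch-deployment.yml \
    -o deployment/operations/enable-router.yml \
    -o deployment/operations/enable-tls.yml \
    -o deployment/operations/enable-ssl.yml
```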
-- There is NO upgrade path from Elasticsearch 1.x to 2.x. Sorry :(
+### Logsearch v210 is based on Elastic stack version 6

-Logsearch > v204.0.0 is based on Elastic stack version 5.
+- Elasticsearch 6.x can use indices created in Elasticsearch 5.x, but not those created in Elasticsearch 2.x or before.
+- **Important**: After upgrading a running 5.x cluster to 6.x, all existing indices will remain available for reading. However, writing to these indices is not possible. In order to write data immediately after the upgrade you have to [change the index naming convention](https://github.com/cloudfoundry-community/logsearch-boshrelease/commit/2f83b41ee14dbe3141e21cc0c40df340d50e0169). Since index names are usually based on the current date, this change can safely be reverted after a day or so.
+### Logsearch v204 is based on Elastic stack version 5.
- For the upgrade procedure from Elasticsearch 2.x, please refer to the [v205.0.0 release notes](https://github.com/cloudfoundry-community/logsearch-boshrelease/releases/tag/v205.0.0#component-updates).

-Logsearch > v210.0.0 is based on Elastic stack version 6.
-
-- Elasticsearch 6.x can use indices created in Elasticsearch 5.x, but not those created in Elasticsearch 2.x or before.
--**Important**: After upgrading running 5.x cluster to 6.x all existing indicies will be available for reading data. However, writing to these indicies is not possible. In order to write data immediatelly after upgrade you have to [change index naming convention](https://github.com/cloudfoundry-community/logsearch-boshrelease/commit/2f83b41ee14dbe3141e21cc0c40df340d50e0169). As long as index names are usually based on current date, this change can be safely reverted in a day or so.
+### Logsearch v200 is based on Elasticsearch 2.x and Kibana 4.
+- There is NO upgrade path from Elasticsearch 1.x to 2.x. Sorry :(

+### Logsearch < v23 was based on Elasticsearch 1.x and Kibana 3.
## Getting Started
-This repo contains Logsearch Core; which deploys an ELK cluster that can receive and parse logs via syslog
-that contain JSON.
+This repo contains Logsearch Core, which deploys an ELK cluster that can receive and parse logs sent via syslog that contain JSON.
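As a purely illustrative example of that ingestion path (the host and port below are placeholders; the actual endpoint depends on your deployment):

```
# hypothetical example: ship one JSON log line over TCP syslog with logger(1)
# <ingestor-host> and 5514 are placeholders for your deployment's ingestion endpoint
$ logger --server <ingestor-host> --port 5514 --tcp '{"message":"hello logsearch","level":"info"}'
```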
Most users will want to combine Logsearch Core with a Logsearch Addon to customise their cluster for a
particular type of logs. It's likely you want to be following one of the Addon installation guides - see below
@@ -36,7 +39,7 @@ for a list of the common Addons:
## Installing Logsearch Core
-
+
Before starting deployment, make sure your BOSH environment is ready and all `BOSH_` environment variables are set. We suggest using the [BBL](https://github.com/cloudfoundry/bosh-bootloader) tool to spin up the BOSH environment.
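One possible way to populate those variables when the director was created with BBL (an assumption about your setup; adapt to however your environment was provisioned):

```
# hypothetical example: export BOSH_ENVIRONMENT, BOSH_CLIENT, etc. from a bbl-managed environment
$ eval "$(bbl print-env)"
$ env | grep '^BOSH_'
```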
```
@@ -45,7 +48,7 @@ $ bosh -d logsearch deploy logsearch-deployment.yml
```
## Common customisations:
-0.Adding new parsing rules:
+Adding new parsing rules:

    logstash_parser:
      filters: |
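The filter body itself is not shown in this diff; purely as a hypothetical illustration, standard Logstash filter configuration goes under `filters`, for example:

    logstash_parser:
      filters: |
        # hypothetical filter: parse message bodies that look like JSON
        if [message] =~ /^\{/ {
          json {
            source => "message"
            target => "app"
          }
        }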
@@ -63,21 +66,6 @@ $ bosh -d logsearch deploy logsearch-deployment.yml
## Known issues
-#### VMs lose connectivity to each other after VM recreation (eg. instance type upgrade)
-
-While this issue is not specific to this boshrelease, it is worth noting.
-
-On certain IAAS'es, (AWS confirmed), the bosh-agent fails to flush the ARP cache of the VMs in the deployment which, in rare cases, results in VMs not being able to communicate with each other after some of them has been recreated. The symptoms of when this happens are varied depending on the affected VMs. It could be anything from HAproxy reporting it couldn't find any backends (eg. Kibana) or the parsers failing to connect to the queue.
-
-To prevent stale ARP entries, set the `director.flush_arp` property of your BOSH deployment to `true`.
-
-The issue, if occurs, should fix itself as the kernel updates incomplete ARP entries, which **should** happen within minutes.
-
-This can also be done manually if an immediate manual fix is preferred. This should be done on the VMs that are trying to talk to the VM that has been recreated.
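For reference, a hypothetical ops-file fragment that would set that property on a bosh-deployment style director manifest (applied to the director's own deployment, not to logsearch; the exact path depends on how your director manifest is structured):

```
# hypothetical ops-file for the DIRECTOR's manifest, not this release
- type: replace
  path: /instance_groups/name=bosh/properties/director/flush_arp?
  value: true
```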