``docs/elassandra/source/enterprise.rst`` (+54 −28)
@@ -1,3 +1,4 @@

Enterprise
==========
@@ -772,8 +773,8 @@ Once authentication is enabled, create a new Cassandra superuser to avoid issue

.. code::

    CREATE ROLE admin WITH PASSWORD='******' AND LOGIN=true AND SUPERUSER=true;
    ALTER ROLE cassandra WITH PASSWORD='******';

Then configure the replication factor for the *system_auth* keyspace according to your cluster configuration (see `Configure Native Authentication <https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureConfigNativeAuth.html>`_).
Finally, adjust the roles and credentials cache settings and disable the JMX configuration of the authentication and authorization caches.
@@ -843,7 +844,7 @@ Privileges are defined in the Cassandra table ``elastic_admin.privileges``.

.. IMPORTANT::

    * Cassandra roles with *superuser* = **true** have full access to Elasticsearch.
    * All cluster-level access should be granted using privileges.
    * Content-Based Security should be used with read-only accounts.
@@ -855,8 +856,8 @@ Cassandra permission associated to a role are `granted <https://docs.datastax.co

.. code::

    GRANT SELECT ON KEYSPACE sales TO sales;
@@ -971,9 +972,9 @@ Strapdata provides a SSL transport client to work with a secured Elassandra clus

.. code::

    CREATE ROLE monitor WITH PASSWORD = 'monitor' AND LOGIN = true;
    INSERT INTO elastic_admin.privileges (role, actions, indices) VALUES('monitor','cluster:monitor/state','.*');
    INSERT INTO elastic_admin.privileges (role, actions, indices) VALUES('monitor','cluster:monitor/nodes/liveness','.*');

#. Add an **Authorization** header to your client containing your base64-encoded login and password. This account must have
   appropriate `Cassandra permissions <https://docs.datastax.com/en/cql/3.3/cql/cql_using/useSecurePermission.html>`_ or privileges in the ``elastic_admin.privileges`` table.
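The **Authorization** header is standard HTTP Basic authentication. A minimal sketch of building it in Python for the *monitor* account created above (the URL you send it to is whatever Elassandra REST endpoint you use; `curl` or any HTTP client works the same way):

```python
import base64

# Build the HTTP Basic Authorization header from the example credentials.
# "monitor"/"monitor" are the credentials from the role created above.
login, password = "monitor", "monitor"
token = base64.b64encode(f"{login}:{password}".encode()).decode()
auth_header = {"Authorization": "Basic " + token}
print(auth_header["Authorization"])  # -> Basic bW9uaXRvcjptb25pdG9y
```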
@@ -1038,13 +1039,13 @@ Kibana needs a dedicated kibana account to manage kibana configuration, with the

.. code::

    CREATE ROLE kibana WITH PASSWORD = '*****' AND LOGIN = true;
    CREATE KEYSPACE "_kibana" WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1':'1'};
    GRANT CREATE ON KEYSPACE "_kibana" TO kibana;
    GRANT ALTER ON KEYSPACE "_kibana" TO kibana;
    GRANT SELECT ON KEYSPACE "_kibana" TO kibana;
    GRANT MODIFY ON KEYSPACE "_kibana" TO kibana;
    LIST ALL PERMISSIONS OF kibana;

* the SELECT permission on visualized indices, especially on your default kibana index.
* the SELECT permission on the kibana keyspace to read the kibana configuration.
@@ -1073,12 +1074,37 @@ Finally, user accounts must have :

.. TIP::

    Once a user is authenticated by kibana, kibana keeps this information. In order to log out from your browser, clear the cookies and data associated with your kibana server.

Kibana and Content-Based Security
.................................

As explained in the `cassandra documentation <http://cassandra.apache.org/doc/latest/cql/security.html#database-roles>`_, you can grant a role to another role and create a hierarchy of roles.
You can then give some elasticsearch privileges to a base role inherited by the user roles allowed to log in, and specify a query filter or field-level filter on this base role.

In the following example, the base role *group_a* has read access to the index *my_index* with a document-level filter defined by a term query.
The user role *bob* (allowed to log in) then inherits the privileges of the base role *group_a* to read the kibana configuration and the index *my_index*, restricted to documents where *category* is *A*.

.. code::

    REVOKE SELECT ON KEYSPACE my_index FROM kibana;
    CREATE ROLE group_a WITH LOGIN = false;
    GRANT SELECT ON KEYSPACE "_kibana" TO group_a;
    INSERT INTO elastic_admin.privileges (role, actions, indices, query) VALUES('group_a','indices:data/read/.*','my_index', '{ "term" : { "category" : "A" }}');
    CREATE ROLE bob WITH PASSWORD = 'bob' AND LOGIN = true;
    GRANT group_a TO bob;

Don't forget to refresh the privileges cache by issuing the following command:

.. code::

    POST /_aaa_clear_privilege_cache
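The *query* column in ``elastic_admin.privileges`` holds a plain Elasticsearch term query. As an illustration only (not Elassandra's actual filtering code), here is what that document-level filter means when applied to documents, assuming exact-match ``term`` semantics:

```python
import json

# The filter stored in the privileges example above; "category" is the
# field name used in that example.
privilege_query = json.loads('{ "term" : { "category" : "A" }}')

def matches(doc, query):
    # Handle only the "term" query used in the example: exact match on
    # a single field. This is a sketch, not Elassandra's implementation.
    (field, value), = query["term"].items()
    return doc.get(field) == value

docs = [
    {"id": 1, "category": "A"},
    {"id": 2, "category": "B"},
]
visible = [d["id"] for d in docs if matches(d, privilege_query)]
print(visible)  # -> [1]
```

Only documents whose *category* is *A* would be visible to *bob* through this privilege.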
Elasticsearch Spark connector
.............................

The `elasticsearch-hadoop <https://github.com/strapdata/elasticsearch-hadoop>`_ connector can access a secured Elassandra cluster by providing the
same SSL/TLS and Username/Password authentication parameters as the original `elasticsearch-hadoop <https://www.elastic.co/guide/en/elasticsearch/hadoop/current/security.html>`_ connector.
Here is an example with a spark-shell.

.. code::
@@ -1098,19 +1124,19 @@ The *spark* role have no cassandra permission, but user *john* inherits its priv

.. code::

    CREATE ROLE spark;
    INSERT INTO elastic_admin.privileges (role,actions,indices) VALUES ('spark','cluster:monitor/.*','.*');
    INSERT INTO elastic_admin.privileges (role,actions,indices) VALUES ('spark','indices:admin/shards/search_shards','.*');
    SELECT * FROM elastic_admin.privileges WHERE role='spark';
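The ``.*`` patterns in the *actions* column suggest that privileges are matched against Elasticsearch action names as regular expressions. A sketch of that matching, under that assumption (the exact anchoring Elassandra uses internally may differ):

```python
import re

# Action patterns granted to the "spark" role in the example above.
granted = ["cluster:monitor/.*", "indices:admin/shards/search_shards"]

def is_authorized(action):
    # Treat each privilege entry as a regular expression that must
    # match the whole action name (an assumption for illustration).
    return any(re.fullmatch(pattern, action) for pattern in granted)

print(is_authorized("cluster:monitor/state"))     # True
print(is_authorized("indices:data/write/index"))  # False
```

This is why the single entry ``cluster:monitor/.*`` covers both ``cluster:monitor/state`` and ``cluster:monitor/nodes/liveness``.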
``docs/elassandra/source/integration.rst`` (+2 −2)
@@ -126,8 +126,8 @@ The `Elasticsearch JDBC driver <https://github.com/Anchormen/sql4es>`_. can be u

Running Spark with Elassandra
-----------------------------

For elassandra 5.5, a modified version of the `elasticsearch-hadoop <https://github.com/elastic/elasticsearch-hadoop>`_ connector is available on the `strapdata repository <https://github.com/strapdata/elasticsearch-hadoop>`_.
This connector works with spark as described in the elasticsearch documentation available at `elasticsearch/hadoop <https://www.elastic.co/guide/en/elasticsearch/hadoop/current/index.html>`_.

For example, in order to submit a spark job in client mode.
``docs/elassandra/source/operations.rst`` (+1 −1)
@@ -47,7 +47,7 @@ By default, when using the Elasticsearch API to replace a document by a new one,

Elassandra inserts a row corresponding to the new document, including null for unset fields.
Without these nulls (cell tombstones), old fields not present in the new document would be kept at the Cassandra level as zombie cells.

Moreover, indexing with ``op_type=create`` (see `Elasticsearch indexing <https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#operation-type>`_) requires a Cassandra PAXOS transaction
to check whether the document exists in the underlying datacenter. This comes with a useless performance cost if you use automatically generated
document IDs (see `Automatic ID generation <https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#_automatic_id_generation>`_).