Merged

51 commits
f95eef8
support monitor_stats privilege for remote cluster privileges
jakelandis Oct 16, 2024
89aa208
fix doc test
jakelandis Oct 21, 2024
32017a7
more trivial test fixes
jakelandis Oct 21, 2024
10dda91
fix plural error message
jakelandis Oct 21, 2024
a720c82
fix RestGetUserPrivilegesActionTests
jakelandis Oct 21, 2024
1f7c189
more test fixes
jakelandis Oct 21, 2024
5e3f7a7
spotless
jakelandis Oct 21, 2024
8ce0e2b
fix forbidden api
jakelandis Oct 21, 2024
d827849
prepare for merge
jakelandis Oct 21, 2024
0e0c5db
Merge remote-tracking branch 'upstream/main' into monitor_stats
jakelandis Oct 21, 2024
fa10f80
spotless
jakelandis Oct 21, 2024
4095cca
mo tests and todos
jakelandis Oct 21, 2024
aeb8ff1
give kibana monitor_stats priv
jakelandis Oct 24, 2024
86c0367
integration test
jakelandis Oct 25, 2024
c259d9d
test fixes
jakelandis Oct 28, 2024
3ccc03b
Merge remote-tracking branch 'upstream/main' into monitor_stats
jakelandis Oct 28, 2024
25ee328
remove tab
jakelandis Oct 28, 2024
49a4d1e
fix test
jakelandis Oct 28, 2024
7e4152f
adjustments for RemoteClusterPermissionsTests.java
jakelandis Oct 28, 2024
c380112
wip: properly gate remote cluster permission
jakelandis Oct 29, 2024
a34b620
passing test
jakelandis Oct 30, 2024
d711d15
clean up
jakelandis Oct 30, 2024
e2d4c16
Merge remote-tracking branch 'upstream/main' into monitor_stats
jakelandis Oct 31, 2024
b4cce77
fix tests
jakelandis Oct 31, 2024
531fe9c
better support to strip privs
jakelandis Oct 31, 2024
2cde2f8
mo tests
jakelandis Oct 31, 2024
633aebe
update test
jakelandis Oct 31, 2024
abe85e0
fix test
jakelandis Nov 4, 2024
cb31734
Merge remote-tracking branch 'upstream/main' into monitor_stats
jakelandis Nov 4, 2024
2a5f79e
Merge remote-tracking branch 'upstream/main' into monitor_stats
jakelandis Nov 4, 2024
8264279
introduce RCS2 BWC test and DRY
jakelandis Nov 4, 2024
892ac08
clean up execution
jakelandis Nov 4, 2024
b4b85c8
clean up bwc tests
jakelandis Nov 4, 2024
7e9cb22
fix set logic
jakelandis Nov 4, 2024
d35dee8
Update docs/changelog/114964.yaml
jakelandis Nov 4, 2024
1c749bc
update javadoc / minor rename
jakelandis Nov 6, 2024
0d6acd1
Merge remote-tracking branch 'upstream/main' into monitor_stats
jakelandis Nov 6, 2024
d19a074
test setup clean up
jakelandis Nov 6, 2024
1ee1d43
clean up conditional striping of privs
jakelandis Nov 7, 2024
cdb3791
Merge remote-tracking branch 'upstream/main' into monitor_stats
jakelandis Nov 7, 2024
4b1dd9d
skip test
jakelandis Nov 7, 2024
c734bbb
add trim to string
jakelandis Nov 7, 2024
23279e5
minor rename
jakelandis Nov 7, 2024
3182bcd
s/hasPrivileges/hasAnyPrivileges
jakelandis Nov 7, 2024
7a66dcb
better assert
jakelandis Nov 7, 2024
d1711c4
fix off by one
jakelandis Nov 7, 2024
fd2503e
ensure duplicate names don't cause issue with removal
jakelandis Nov 7, 2024
b816117
typo
jakelandis Nov 7, 2024
d1f4029
short circuit removeUnsupportedPrivileges
jakelandis Nov 7, 2024
31096b7
spotless
jakelandis Nov 7, 2024
ba6b70e
Merge remote-tracking branch 'upstream/main' into monitor_stats
jakelandis Nov 8, 2024
6 changes: 6 additions & 0 deletions docs/changelog/114964.yaml
@@ -0,0 +1,6 @@
pr: 114964
summary: Add a `monitor_stats` privilege and allow that privilege for remote cluster
privileges
area: Authorization
type: enhancement
issues: []
@@ -315,7 +315,7 @@ The result would then have the `errors` field set to `true` and hold the error f
"details": {
"my_admin_role": { <4>
"type": "action_request_validation_exception",
"reason": "Validation Failed: 1: unknown cluster privilege [bad_cluster_privilege]. a privilege must be either one of the predefined cluster privilege names [manage_own_api_key,manage_data_stream_global_retention,monitor_data_stream_global_retention,none,cancel_task,cross_cluster_replication,cross_cluster_search,delegate_pki,grant_api_key,manage_autoscaling,manage_index_templates,manage_logstash_pipelines,manage_oidc,manage_saml,manage_search_application,manage_search_query_rules,manage_search_synonyms,manage_service_account,manage_token,manage_user_profile,monitor_connector,monitor_enrich,monitor_inference,monitor_ml,monitor_rollup,monitor_snapshot,monitor_text_structure,monitor_watcher,post_behavioral_analytics_event,read_ccr,read_connector_secrets,read_fleet_secrets,read_ilm,read_pipeline,read_security,read_slm,transport_client,write_connector_secrets,write_fleet_secrets,create_snapshot,manage_behavioral_analytics,manage_ccr,manage_connector,manage_enrich,manage_ilm,manage_inference,manage_ml,manage_rollup,manage_slm,manage_watcher,monitor_data_frame_transforms,monitor_transform,manage_api_key,manage_ingest_pipelines,manage_pipeline,manage_data_frame_transforms,manage_transform,manage_security,monitor,manage,all] or a pattern over one of the available cluster actions;"
"reason": "Validation Failed: 1: unknown cluster privilege [bad_cluster_privilege]. a privilege must be either one of the predefined cluster privilege names [manage_own_api_key,manage_data_stream_global_retention,monitor_data_stream_global_retention,none,cancel_task,cross_cluster_replication,cross_cluster_search,delegate_pki,grant_api_key,manage_autoscaling,manage_index_templates,manage_logstash_pipelines,manage_oidc,manage_saml,manage_search_application,manage_search_query_rules,manage_search_synonyms,manage_service_account,manage_token,manage_user_profile,monitor_connector,monitor_enrich,monitor_inference,monitor_ml,monitor_rollup,monitor_snapshot,monitor_stats,monitor_text_structure,monitor_watcher,post_behavioral_analytics_event,read_ccr,read_connector_secrets,read_fleet_secrets,read_ilm,read_pipeline,read_security,read_slm,transport_client,write_connector_secrets,write_fleet_secrets,create_snapshot,manage_behavioral_analytics,manage_ccr,manage_connector,manage_enrich,manage_ilm,manage_inference,manage_ml,manage_rollup,manage_slm,manage_watcher,monitor_data_frame_transforms,monitor_transform,manage_api_key,manage_ingest_pipelines,manage_pipeline,manage_data_frame_transforms,manage_transform,manage_security,monitor,manage,all] or a pattern over one of the available cluster actions;"
}
}
}
@@ -111,6 +111,7 @@ A successful call returns an object with "cluster", "index", and "remote_cluster
"monitor_ml",
"monitor_rollup",
"monitor_snapshot",
"monitor_stats",
"monitor_text_structure",
"monitor_transform",
"monitor_watcher",
@@ -152,7 +153,8 @@ A successful call returns an object with "cluster", "index", and "remote_cluster
"write"
],
"remote_cluster" : [
"monitor_enrich"
"monitor_enrich",
"monitor_stats"
]
}
--------------------------------------------------
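As a quick orientation before the code changes below, here is a hedged Java sketch (mine, not part of this PR) showing how the documented remote_cluster values line up with the RemoteClusterPermissions API that the rest of the diff builds on; the class name and main method are illustrative only:

import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions;

public class SupportedRemoteClusterPrivilegesSketch {
    public static void main(String[] args) {
        // After this change, the supported set should contain both remote_cluster
        // privileges shown in the documentation output above.
        var supported = RemoteClusterPermissions.getSupportedRemoteClusterPermissions();
        System.out.println(supported.contains("monitor_enrich")); // expected: true
        System.out.println(supported.contains("monitor_stats"));  // expected: true
    }
}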
@@ -188,6 +188,7 @@ static TransportVersion def(int id) {
public static final TransportVersion ESQL_CCS_EXEC_INFO_WITH_FAILURES = def(8_783_00_0);
public static final TransportVersion LOGSDB_TELEMETRY = def(8_784_00_0);
public static final TransportVersion LOGSDB_TELEMETRY_STATS = def(8_785_00_0);
public static final TransportVersion ROLE_MONITOR_STATS = def(8_786_00_0);

/*
* STOP! READ THIS FIRST! No, really,
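ROLE_MONITOR_STATS follows the usual transport-version gating idiom: serialize the new data only when the receiving stream is new enough to read it. A minimal, hypothetical sketch of that write-side pattern (the helper class, method, and privileges parameter are illustrative, not code from this PR):

import java.io.IOException;
import java.util.Collection;

import org.elasticsearch.TransportVersions;
import org.elasticsearch.common.io.stream.StreamOutput;

public class MonitorStatsBwcWriteSketch {
    static void writePrivileges(StreamOutput out, Collection<String> privileges) throws IOException {
        // Older nodes do not know the monitor_stats privilege, so only serialize it
        // to streams that are new enough to read it back.
        if (out.getTransportVersion().onOrAfter(TransportVersions.ROLE_MONITOR_STATS)) {
            out.writeStringCollection(privileges);
        }
    }
}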
1 change: 1 addition & 0 deletions x-pack/plugin/build.gradle
@@ -86,5 +86,6 @@ tasks.named("yamlRestCompatTestTransform").configure({ task ->
task.skipTest("esql/60_usage/Basic ESQL usage output (telemetry)", "The telemetry output changed. We dropped a column. That's safe.")
task.skipTest("esql/80_text/reverse text", "The output type changed from TEXT to KEYWORD.")
task.skipTest("esql/80_text/values function", "The output type changed from TEXT to KEYWORD.")
task.replaceValueInLength("cluster", 62, "Test get builtin privileges") // new builtin privileges have been added
Review comment (Contributor):
Sorry I've stared at this for a bit but I don't fully get it -- why will that test have the wrong value for num privileges? Is it because the test spec is somehow sourced from an older ES version?

Reply (Contributor Author, jakelandis):
Is it because the test spec is somehow sourced from an older ES version?

Yup, exactly. The compat tests are sourced from the last minor of the last major (8.x). I also realized that this will break as soon as I back port this code to 8.x, so I will skip it for now. There probably isn't a need to ever re-enable it either, since the number of privileges is not relevant to compatibility.

})

@@ -36,6 +36,7 @@
import org.elasticsearch.xpack.core.security.authc.service.ServiceAccountSettings;
import org.elasticsearch.xpack.core.security.authc.support.AuthenticationContextSerializer;
import org.elasticsearch.xpack.core.security.authz.RoleDescriptor;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions;
import org.elasticsearch.xpack.core.security.user.AnonymousUser;
import org.elasticsearch.xpack.core.security.user.InternalUser;
import org.elasticsearch.xpack.core.security.user.InternalUsers;
@@ -76,6 +77,7 @@
import static org.elasticsearch.xpack.core.security.authc.AuthenticationField.FALLBACK_REALM_NAME;
import static org.elasticsearch.xpack.core.security.authc.AuthenticationField.FALLBACK_REALM_TYPE;
import static org.elasticsearch.xpack.core.security.authc.RealmDomain.REALM_DOMAIN_PARSER;
import static org.elasticsearch.xpack.core.security.authz.RoleDescriptor.Fields.REMOTE_CLUSTER;
import static org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions.ROLE_REMOTE_CLUSTER_PRIVS;

/**
@@ -233,8 +235,8 @@ public Authentication maybeRewriteForOlderVersion(TransportVersion olderVersion)
+ "]"
);
}

final Map<String, Object> newMetadata = maybeRewriteMetadata(olderVersion, this);

final Authentication newAuthentication;
if (isRunAs()) {
// The lookup user for run-as currently doesn't have authentication metadata associated with them because
@@ -272,12 +274,23 @@ public Authentication maybeRewriteForOlderVersion(TransportVersion olderVersion)
}

private static Map<String, Object> maybeRewriteMetadata(TransportVersion olderVersion, Authentication authentication) {
if (authentication.isAuthenticatedAsApiKey()) {
return maybeRewriteMetadataForApiKeyRoleDescriptors(olderVersion, authentication);
} else if (authentication.isCrossClusterAccess()) {
return maybeRewriteMetadataForCrossClusterAccessAuthentication(olderVersion, authentication);
} else {
return authentication.getAuthenticatingSubject().getMetadata();
try {
if (authentication.isAuthenticatedAsApiKey()) {
return maybeRewriteMetadataForApiKeyRoleDescriptors(olderVersion, authentication);
} else if (authentication.isCrossClusterAccess()) {
return maybeRewriteMetadataForCrossClusterAccessAuthentication(olderVersion, authentication);
} else {
return authentication.getAuthenticatingSubject().getMetadata();
}
} catch (Exception e) {
// CCS workflows may swallow the exception message making this difficult to troubleshoot, so we explicitly log and re-throw
// here. It may result in duplicate logs, so we only log the message at warn level.
if (logger.isDebugEnabled()) {
logger.debug("Unexpected exception thrown while rewriting metadata. This is likely a bug.", e);
} else {
logger.warn("Unexpected exception thrown while rewriting metadata. This is likely a bug [" + e.getMessage() + "]");
}
throw e;
}
}

@@ -1323,6 +1336,7 @@ private static Map<String, Object> maybeRewriteMetadataForApiKeyRoleDescriptors(

if (authentication.getEffectiveSubject().getTransportVersion().onOrAfter(ROLE_REMOTE_CLUSTER_PRIVS)
&& streamVersion.before(ROLE_REMOTE_CLUSTER_PRIVS)) {
// if the remote does not understand the remote_cluster field remove it
metadata = new HashMap<>(metadata);
metadata.put(
AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY,
Expand All @@ -1336,7 +1350,27 @@ private static Map<String, Object> maybeRewriteMetadataForApiKeyRoleDescriptors(
(BytesReference) metadata.get(AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY)
)
);
}
} else if (authentication.getEffectiveSubject()
.getTransportVersion()
.onOrAfter(TRANSPORT_VERSION_ADVANCED_REMOTE_CLUSTER_SECURITY)
Review comment (Contributor):
I think TRANSPORT_VERSION_ADVANCED_REMOTE_CLUSTER_SECURITY should be ROLE_REMOTE_CLUSTER_PRIVS -- otherwise, we'll include versions for which there is no way that the authentication subject would ever have a remote_cluster field to begin with.

Also, I think a comment here explaining that maybeRemoveRemoteClusterPrivilegesFromRoleDescriptors handles more specific transport version checks would help. It breaks with the pattern of

if (authentication.onOrAfter(version) && streamVersions.before(version)) { do something } and looks a bit like a copy-paste bug at first glance

Reply (Contributor Author, jakelandis, Nov 7, 2024):
I think TRANSPORT_VERSION_ADVANCED_REMOTE_CLUSTER_SECURITY should be ROLE_REMOTE_CLUSTER_PRIVS -- otherwise, we'll include versions for which there is no way that the authentication subject would ever have a remote_cluster field to begin with.

I'll change it, but don't think it matters. IIUC this specific check is only needed for being able to know, in the local cluster, for any persisted authentication (e.g. watcher, ilm) done with an API Key, if we are working with a byte[] representation or the older format for persisted Authentication objects. So I believe that any value after VERSION_API_KEY_ROLES_AS_BYTES would work fine. I don't have a specific test for this, but since we convert byte[] to Map, this check should only be concerned with VERSION_API_KEY_ROLES_AS_BYTES in this cluster. The other check && streamVersion.onOrAfter(ROLE_REMOTE_CLUSTER_PRIVS)) { will protect against any problems with the remote understanding remote_cluster.

For example, if the persisted authentication had the transport version before ROLE_REMOTE_CLUSTER_PRIVS (and the cluster we are sending to is new enough), then execution will fall into the else condition, but the persisted authentication object would never have a remote_cluster entry and effectively do nothing (OK, it does a bit of necessary work). It won't blow up since we are just manipulating json/byte[] and maps. I agree that it is confusing as-is, and using ROLE_REMOTE_CLUSTER_PRIVS probably helps a bit with readability (and avoids some unnecessary work). Long term, we should look for a way to stop explicitly rewriting these things and make it a first-class attribute of a model that understands authentication and transport version concerns. Fundamentally, all of the checks in this method do the same thing: strip out things that streams of specific versions won't understand, and IMO it's plausible to have a single strategy (for both RCS and node-to-node in the same cluster).

(also updated comments to hopefully be a bit more clear)

&& streamVersion.onOrAfter(ROLE_REMOTE_CLUSTER_PRIVS)) {
// the remote does understand the remote_cluster field, so check each permission and remove permission as needed
metadata = new HashMap<>(metadata);
metadata.put(
AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY,
maybeRemoveRemoteClusterPrivilegesFromRoleDescriptors(
(BytesReference) metadata.get(AuthenticationField.API_KEY_ROLE_DESCRIPTORS_KEY),
streamVersion
)
);
metadata.put(
AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY,
maybeRemoveRemoteClusterPrivilegesFromRoleDescriptors(
(BytesReference) metadata.get(AuthenticationField.API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY),
streamVersion
)
);
}

if (authentication.getEffectiveSubject().getTransportVersion().onOrAfter(VERSION_API_KEY_ROLES_AS_BYTES)
&& streamVersion.before(VERSION_API_KEY_ROLES_AS_BYTES)) {
@@ -1417,7 +1451,7 @@ private static BytesReference convertRoleDescriptorsMapToBytes(Map<String, Objec
}

static BytesReference maybeRemoveRemoteClusterFromRoleDescriptors(BytesReference roleDescriptorsBytes) {
return maybeRemoveTopLevelFromRoleDescriptors(roleDescriptorsBytes, RoleDescriptor.Fields.REMOTE_CLUSTER.getPreferredName());
return maybeRemoveTopLevelFromRoleDescriptors(roleDescriptorsBytes, REMOTE_CLUSTER.getPreferredName());
}

static BytesReference maybeRemoveRemoteIndicesFromRoleDescriptors(BytesReference roleDescriptorsBytes) {
@@ -1450,6 +1484,66 @@ static BytesReference maybeRemoveTopLevelFromRoleDescriptors(BytesReference role
}
}

/**
* Before we send the role descriptors to the remote cluster, we need to remove the remote cluster privileges that the other cluster
* will not understand. If all privileges are removed, then the entire "remote_cluster" is removed to avoid sending empty privileges.
* @param roleDescriptorsBytes The role descriptors to be sent to the remote cluster, represented as bytes.
* @return The role descriptors with the privileges that are unsupported by the target version removed, represented as bytes.
*/
@SuppressWarnings("unchecked")
static BytesReference maybeRemoveRemoteClusterPrivilegesFromRoleDescriptors(
BytesReference roleDescriptorsBytes,
TransportVersion outboundVersion
) {
if (roleDescriptorsBytes == null || roleDescriptorsBytes.length() == 0) {
return roleDescriptorsBytes;
}
final Map<String, Object> roleDescriptorsMap = convertRoleDescriptorsBytesToMap(roleDescriptorsBytes);
final Map<String, Object> roleDescriptorsMapMutated = new HashMap<>(roleDescriptorsMap);
final AtomicBoolean modified = new AtomicBoolean(false);
roleDescriptorsMap.forEach((key, value) -> {
if (value instanceof Map) {
Map<String, Object> roleDescriptor = (Map<String, Object>) value;
roleDescriptor.forEach((innerKey, innerValue) -> {
// example: remote_cluster=[{privileges=[monitor_enrich, monitor_stats]
if (REMOTE_CLUSTER.getPreferredName().equals(innerKey)) {
assert innerValue instanceof List;
RemoteClusterPermissions discoveredRemoteClusterPermission = new RemoteClusterPermissions(
(List<Map<String, List<String>>>) innerValue
);
RemoteClusterPermissions mutated = discoveredRemoteClusterPermission.removeUnsupportedPrivileges(outboundVersion);
if (mutated.equals(discoveredRemoteClusterPermission) == false) {
// swap out the old value with the new value
modified.set(true);
Map<String, Object> remoteClusterMap = new HashMap<>((Map<String, Object>) roleDescriptorsMapMutated.get(key));
if (mutated.hasPrivileges()) {
// has at least one group with privileges
remoteClusterMap.put(innerKey, mutated.toMap());
} else {
// has no groups with privileges
remoteClusterMap.remove(innerKey);
}
roleDescriptorsMapMutated.put(key, remoteClusterMap);
}
}
});
}
});
if (modified.get()) {
logger.debug(
"mutated role descriptors. Changed from {} to {} for outbound version {}",
roleDescriptorsMap,
roleDescriptorsMapMutated,
outboundVersion
);
return convertRoleDescriptorsMapToBytes(roleDescriptorsMapMutated);
} else {
// No need to serialize if we did not change anything.
logger.trace("no change to role descriptors {} for outbound version {}", roleDescriptorsMap, outboundVersion);
return roleDescriptorsBytes;
}
}

static boolean equivalentRealms(String name1, String type1, String name2, String type2) {
if (false == type1.equals(type2)) {
return false;
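To summarize the review thread above, the outbound rewrite has two branches: a remote older than ROLE_REMOTE_CLUSTER_PRIVS cannot parse a remote_cluster section at all, so the whole section is dropped, while a newer remote only needs unsupported privileges (for example monitor_stats before ROLE_MONITOR_STATS) stripped. A condensed, hypothetical sketch of that decision, reusing the two package-private helpers shown in this diff (the wrapper class is illustrative and assumes it sits in the same package as Authentication):

// Assumption: same package as Authentication so the package-private helpers are reachable;
// this is a summary of the logic above, not additional production code.
package org.elasticsearch.xpack.core.security.authc;

import org.elasticsearch.TransportVersion;
import org.elasticsearch.common.bytes.BytesReference;

import static org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions.ROLE_REMOTE_CLUSTER_PRIVS;

class OutboundRemoteClusterRewriteSketch {
    static BytesReference rewriteForOutbound(BytesReference roleDescriptorsBytes, TransportVersion streamVersion) {
        if (streamVersion.before(ROLE_REMOTE_CLUSTER_PRIVS)) {
            // The remote cannot parse remote_cluster at all: remove the whole section.
            return Authentication.maybeRemoveRemoteClusterFromRoleDescriptors(roleDescriptorsBytes);
        }
        // The remote understands remote_cluster: strip only the privileges it does not support.
        return Authentication.maybeRemoveRemoteClusterPrivilegesFromRoleDescriptors(roleDescriptorsBytes, streamVersion);
    }
}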
@@ -6,6 +6,8 @@
*/
package org.elasticsearch.xpack.core.security.authz;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.TransportVersion;
@@ -62,6 +64,7 @@ public class RoleDescriptor implements ToXContentObject, Writeable {
public static final TransportVersion SECURITY_ROLE_DESCRIPTION = TransportVersions.V_8_15_0;

public static final String ROLE_TYPE = "role";
private static final Logger logger = LogManager.getLogger(RoleDescriptor.class);

private final String name;
private final String[] clusterPrivileges;
@@ -830,25 +833,32 @@ private static RemoteClusterPermissions parseRemoteCluster(final String roleName
currentFieldName = parser.currentName();
} else if (Fields.PRIVILEGES.match(currentFieldName, parser.getDeprecationHandler())) {
privileges = readStringArray(roleName, parser, false);
if (privileges.length != 1
|| RemoteClusterPermissions.getSupportedRemoteClusterPermissions()
.contains(privileges[0].trim().toLowerCase(Locale.ROOT)) == false) {
throw new ElasticsearchParseException(
"failed to parse remote_cluster for role [{}]. "
+ RemoteClusterPermissions.getSupportedRemoteClusterPermissions()
+ " is the only value allowed for [{}] within [remote_cluster]",
if (Arrays.stream(privileges)
.map(s -> s.toLowerCase(Locale.ROOT))
Review comment (Contributor):
Not sure that we need this, but previously we had a trim() call; do we want to keep that here as well?

Reply (Contributor Author, jakelandis):
good catch, will add trim()

.allMatch(RemoteClusterPermissions.getSupportedRemoteClusterPermissions()::contains) == false) {
final String message = String.format(
Locale.ROOT,
"failed to parse remote_cluster for role [%s]. "
+ "%s are the only values allowed for [%s] within [remote_cluster]. Found %s",
roleName,
currentFieldName
RemoteClusterPermissions.getSupportedRemoteClusterPermissions(),
currentFieldName,
Arrays.toString(privileges)
);
logger.info(message);
throw new ElasticsearchParseException(message);
}
} else if (Fields.CLUSTERS.match(currentFieldName, parser.getDeprecationHandler())) {
clusters = readStringArray(roleName, parser, false);
} else {
throw new ElasticsearchParseException(
"failed to parse remote_cluster for role [{}]. unexpected field [{}]",
final String message = String.format(
Locale.ROOT,
"failed to parse remote_cluster for role [%s]. unexpected field [%s]",
roleName,
currentFieldName
);
logger.info(message);
throw new ElasticsearchParseException(message);
}
}
if (privileges != null && clusters == null) {
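Finally, a small self-contained sketch of the privilege-name validation that parseRemoteCluster performs after this change, including the trim() call the review thread agreed to add; the role name, privilege values, and wrapper class are illustrative only:

import java.util.Arrays;
import java.util.Locale;

import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.xpack.core.security.authz.permission.RemoteClusterPermissions;

public class RemoteClusterPrivilegeValidationSketch {
    static void validate(String roleName, String[] privileges) {
        boolean allSupported = Arrays.stream(privileges)
            .map(s -> s.trim().toLowerCase(Locale.ROOT))
            .allMatch(RemoteClusterPermissions.getSupportedRemoteClusterPermissions()::contains);
        if (allSupported == false) {
            // Mirrors the parser above: reject any name outside the supported set.
            throw new ElasticsearchParseException(
                "failed to parse remote_cluster for role [" + roleName + "]. Found " + Arrays.toString(privileges)
            );
        }
    }

    public static void main(String[] args) {
        validate("my_role", new String[] { " MONITOR_ENRICH ", "monitor_stats" }); // accepted
        validate("my_role", new String[] { "bad_privilege" });                     // throws
    }
}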