Elasticsearch Pending #12842
I found this method to resolve the problem, but it isn't working for me:

sudo so-elasticsearch-query _cluster/health

[root@manager-so]# sudo so-elasticsearch-query .kibana_analytics_8.10.4_001/_settings -d '{"number_of_replicas":0}' -XPUT
{"error":{"root_cause":[{"type":"security_exception","reason":"action [indices:admin/settings/update] is unauthorized for user [so_elastic] with effective roles [superuser] on restricted indices [.kibana_analytics_8.10.4_001], this action is granted by the index privileges [manage,all]"}],"type":"security_exception","reason":"action [indices:admin/settings/update] is unauthorized for user [so_elastic] with effective roles [superuser] on restricted indices [.kibana_analytics_8.10.4_001], this action is granted by the index privileges [manage,all]"},"status":403}
[root@manager-so _state]
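The 403 above is expected: .kibana_analytics_8.10.4_001 is a restricted system index, and the error itself says the so_elastic user is not authorized on restricted indices even with the superuser role. Below is a minimal sketch of the same replica change against an ordinary (non-system) index, assuming those are the indices that actually hold the unassigned replicas; the index name is only a placeholder, not a real index from this cluster:

# Find the indices with unassigned shards first:
sudo so-elasticsearch-query _cat/shards | grep UNASSIGNED
# Drop replicas on one ordinary index (placeholder name, substitute a real one from the list above):
sudo so-elasticsearch-query logs-example-2024.04.23/_settings -d '{"index":{"number_of_replicas":0}}' -XPUT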
Version
2.4.60
Installation Method
Security Onion ISO image
Description
other (please provide detail below)
Installation Type
Distributed
Location
on-prem with Internet access
Hardware Specs
Meets minimum requirements
CPU
16
RAM
16
Storage for /
200-300 GB
Storage for /nsm
1.7 TB
Network Traffic Collection
span port
Network Traffic Speeds
Less than 1Gbps
Status
Yes, all services on all nodes are running OK
Salt Status
No, there are no failures
Logs
No, there are no additional clues
Detail
Hello,
I have a problem with ES on the manager node.
The manager node has no search role, and there is one search node. Both servers have 16 CPUs, 16 GB RAM, and 2 TB disks, with 1.7 TB for /nsm.
The first time I saw the Elasticsearch Pending status in the grid, I ran "sudo so-elasticsearch-query _cat/shards | grep UN" to see the unassigned shards.
Then I ran "sudo so-elasticsearch-query $index/_settings -d '{"number_of_replicas":0}' -XPUT" to remove the unassigned replicas, but it didn't work. All affected indices stayed in place.
I also ran the so-elasticsearch-shards-list command and saw that the manager stores Zeek, Suricata, and other data. I had assumed that this data would be stored only on the search node.
Here is part of the unassigned shards output I see. I think it starts when /nsm on the search or manager node is more than 85% full.
.ds-.fleet-fileds-fromhost-meta-agent-2024.04.12-000001 0 p STARTED 1 16.1kb 10.99.1.40 search-01
.ds-.fleet-fileds-fromhost-meta-agent-2024.04.12-000001 0 r UNASSIGNED
.ds-.fleet-fileds-fromhost-data-agent-2024.04.19-000002 0 p STARTED 0 248b 10.99.1.30 manager-so
.ds-.fleet-fileds-fromhost-data-agent-2024.04.19-000002 0 r UNASSIGNED
.internal.alerts-observability.slo.alerts-default-000001 0 p STARTED 0 248b 10.99.1.30 manager-so
.internal.alerts-observability.slo.alerts-default-000001 0 r UNASSIGNED
.internal.alerts-observability.metrics.alerts-default-000001 0 p STARTED 0 248b 10.99.1.30 manager-so
.internal.alerts-observability.metrics.alerts-default-000001 0 r UNASSIGNED
.internal.alerts-stack.alerts-default-000001 0 p STARTED 0 248b 10.99.1.30 manager-so
.internal.alerts-stack.alerts-default-000001 0 r UNASSIGNED
.kibana-observability-ai-assistant-conversations-000001 0 p STARTED 0 248b 10.99.1.30 manager-so
.kibana-observability-ai-assistant-conversations-000001 0 r UNASSIGNED
.internal.alerts-security.alerts-default-000001 0 p STARTED 0 248b 10.99.1.30 manager-so
.internal.alerts-security.alerts-default-000001 0 r UNASSIGNED
.fleet-servers-7 0 p STARTED 15 174.9kb 10.99.1.30 manager-so
.fleet-servers-7 0 r UNASSIGNED
It is always the same indices. How can I make sure they exist as a single copy only on the manager? And why does a manager without a search role store Zeek, Suricata, and other data?
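The unassigned shards listed above are all internal Fleet/Kibana/alerting indices rather than Zeek or Suricata data. One thing worth checking is whether these internal indices manage their own replica count through index.auto_expand_replicas: if they do, a replica is re-added automatically as soon as a second data node exists, so manually setting number_of_replicas to 0 will not stick. A minimal sketch for inspecting that setting on one of the affected indices (index name taken from the list above):

# Show the effective replica settings, including defaults, for one affected index:
sudo so-elasticsearch-query '.internal.alerts-stack.alerts-default-000001/_settings?include_defaults=true&pretty' | grep -E 'number_of_replicas|auto_expand_replicas'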
[root@manager-so adminso]# so-elasticsearch-query _cluster/allocation/explain?pretty
{
  "note" : "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API.",
  "index" : ".internal.alerts-stack.alerts-default-000001",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "REPLICA_ADDED",
    "at" : "2024-04-23T11:16:34.724Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
  "node_allocation_decisions" : [
    {
      "node_id" : "JRO7648bQdCCoWmyAH8flg",
      "node_name" : "manager-so",
      "transport_address" : "10.99.1.30:9300",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.config_version" : "10.0.0"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[.internal.alerts-stack.alerts-default-000001][0], node[JRO7648bQdCCoWmyAH8flg], [P], s[STARTED], a[id=LrrLDd3rRs6fKvWoMD1LHg], failed_attempts[0]]"
        },
        {
          "decider" : "disk_threshold",
          "decision" : "NO",
          "explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=80%], having less than the minimum required [348.5gb] free space, actual free: [291.6gb], actual used: [83.2%]"
        }
      ]
    },
    {
      "node_id" : "fCfpL0iYS3i9tUK980wVlg",
      "node_name" : "search-01",
      "transport_address" : "10.99.1.40:9300",
      "node_attributes" : {
        "transform.config_version" : "10.0.0",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "disk_threshold",
          "decision" : "NO",
          "explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=80%], having less than the minimum required [348.4gb] free space, actual free: [233.4gb], actual used: [86.6%]"
        }
      ]
    }
  ]
}
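Both node_allocation_decisions in the explain output point to the same root cause: each node is over the cluster.routing.allocation.disk.watermark.low threshold of 80%, so Elasticsearch refuses to allocate the new replica anywhere. This matches the observation above that the problem starts once /nsm passes roughly 85% usage. A minimal sketch for confirming the per-node disk picture and, only as a temporary workaround, raising the low watermark (the 90% value is just an example, not a recommendation; freeing disk space or keeping replicas at 0 is the durable fix):

# Per-node shard counts and disk usage as Elasticsearch sees them:
sudo so-elasticsearch-query '_cat/allocation?v'
# Optional and temporary: raise the low watermark so the replica can allocate again:
sudo so-elasticsearch-query _cluster/settings -XPUT -d '{"persistent":{"cluster.routing.allocation.disk.watermark.low":"90%"}}'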