diff --git a/docs/.vuepress/public/assets/img/ask-clusters-as-nodes.png b/docs/.vuepress/public/assets/img/ask-clusters-as-nodes.png
new file mode 100644
index 000000000..c92170c28
Binary files /dev/null and b/docs/.vuepress/public/assets/img/ask-clusters-as-nodes.png differ
diff --git a/docs/.vuepress/public/assets/img/aws-eks-node-source.png b/docs/.vuepress/public/assets/img/aws-eks-node-source.png
new file mode 100644
index 000000000..b262b4d1b
Binary files /dev/null and b/docs/.vuepress/public/assets/img/aws-eks-node-source.png differ
diff --git a/docs/.vuepress/public/assets/img/azure-add-role-assignment.png b/docs/.vuepress/public/assets/img/azure-add-role-assignment.png
new file mode 100644
index 000000000..0330857e2
Binary files /dev/null and b/docs/.vuepress/public/assets/img/azure-add-role-assignment.png differ
diff --git a/docs/.vuepress/public/assets/img/azure-select-members.png b/docs/.vuepress/public/assets/img/azure-select-members.png
new file mode 100644
index 000000000..295282755
Binary files /dev/null and b/docs/.vuepress/public/assets/img/azure-select-members.png differ
diff --git a/docs/.vuepress/public/assets/img/eks-api-access-configuration.png b/docs/.vuepress/public/assets/img/eks-api-access-configuration.png
new file mode 100644
index 000000000..6e684fcd0
Binary files /dev/null and b/docs/.vuepress/public/assets/img/eks-api-access-configuration.png differ
diff --git a/docs/.vuepress/public/assets/img/eks-clusters-as-nodes.png b/docs/.vuepress/public/assets/img/eks-clusters-as-nodes.png
new file mode 100644
index 000000000..a02d7bbe7
Binary files /dev/null and b/docs/.vuepress/public/assets/img/eks-clusters-as-nodes.png differ
diff --git a/docs/.vuepress/public/assets/img/eks-create-access-entry.png b/docs/.vuepress/public/assets/img/eks-create-access-entry.png
new file mode 100644
index 000000000..4b3d3df93
Binary files /dev/null and b/docs/.vuepress/public/assets/img/eks-create-access-entry.png differ
diff --git a/docs/.vuepress/public/assets/img/gke-cluster-as-node.png b/docs/.vuepress/public/assets/img/gke-cluster-as-node.png
new file mode 100644
index 000000000..32a8dd67a
Binary files /dev/null and b/docs/.vuepress/public/assets/img/gke-cluster-as-node.png differ
diff --git a/docs/.vuepress/public/assets/img/gke-cluster-viewer-role.png b/docs/.vuepress/public/assets/img/gke-cluster-viewer-role.png
new file mode 100644
index 000000000..484be234a
Binary files /dev/null and b/docs/.vuepress/public/assets/img/gke-cluster-viewer-role.png differ
diff --git a/docs/.vuepress/public/assets/img/gke-node-source-unauthorized-error.png b/docs/.vuepress/public/assets/img/gke-node-source-unauthorized-error.png
new file mode 100644
index 000000000..56c08d671
Binary files /dev/null and b/docs/.vuepress/public/assets/img/gke-node-source-unauthorized-error.png differ
diff --git a/docs/.vuepress/public/assets/img/k8s-cloud-provider-architecture.png b/docs/.vuepress/public/assets/img/k8s-cloud-provider-architecture.png
new file mode 100644
index 000000000..80055195a
Binary files /dev/null and b/docs/.vuepress/public/assets/img/k8s-cloud-provider-architecture.png differ
diff --git a/docs/.vuepress/public/assets/img/node-source-unauthorized-error.png b/docs/.vuepress/public/assets/img/node-source-unauthorized-error.png
new file mode 100644
index 000000000..5cdd17429
Binary files /dev/null and b/docs/.vuepress/public/assets/img/node-source-unauthorized-error.png differ
diff --git a/docs/manual/plugins/kubernetes-plugins-overview.md b/docs/manual/plugins/kubernetes-plugins-overview.md
index d6ab16fb4..6d9289be9 100644
--- a/docs/manual/plugins/kubernetes-plugins-overview.md
+++ b/docs/manual/plugins/kubernetes-plugins-overview.md
@@ -1,20 +1,38 @@
-# Kubernetes Plugins
+# Kubernetes Integration Overview
:::enterprise
:::

Runbook Automation integrates with Kubernetes through a variety of plugins. By integrating Runbook Automation with Kubernetes, users can automate and provide self-service interfaces for operations in their Kubernetes Clusters.
-:::warning Open Source Plugins
-This document covers the plugins available in the commercial Runbook Automation products. For a list of Kubernetes plugins available for Rundeck Community (open-source), see documentation for the [**Open Source Kubernetes plugins**](/manual/plugins/kubernetes-open-source.md).
+
+## Kubernetes Plugins Available in Runbook Automation
+
+
+| Plugin Name | Plugin Type | Description |
+|:---------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------|
+| [**Amazon EKS Node Source**](/manual/projects/resource-model-sources/aws-eks.md) | Node Source | Imports Amazon Web Services EKS Clusters as Nodes. |
+| [**Azure AKS Node Source**](/manual/projects/resource-model-sources/azure-aks.md) | Node Source | Imports Azure AKS Clusters as Nodes. |
+| [**Google Cloud GKE Node Source**](/manual/projects/resource-model-sources/gcp-gke.md) | Node Source | Imports Google Cloud GKE Clusters as Nodes. |
+| [**Kubernetes Cluster Create Object**](/manual/jobs/job-plugins/node-steps/kubernetes-create-object) | Node Step | This plugin creates an object of a selected kind within a Kubernetes cluster. |
+| [**Kubernetes Cluster Delete Object**](/manual/jobs/job-plugins/node-steps/kubernetes-delete-object) | Node Step | This plugin deletes an object of a selected kind within a Kubernetes cluster. |
+| [**Kubernetes Cluster Describe Object**](/manual/jobs/job-plugins/node-steps/kubernetes-describe-object) | Node Step | This plugin describes an object of a selected kind within a Kubernetes cluster. |
+| [**Kubernetes Cluster List Objects**](/manual/jobs/job-plugins/node-steps/kubernetes-list-objects) | Node Step | This plugin lists objects of a selected kind within a Kubernetes cluster. |
+| [**Kubernetes Cluster Object Logs**](/manual/jobs/job-plugins/node-steps/kubernetes-object-logs) | Node Step | This plugin allows you to view the logs of an object within a Kubernetes cluster. |
+| [**Kubernetes Cluster Run Command**](/manual/jobs/job-plugins/node-steps/kubernetes-run-command) | Node Step | This plugin allows you to execute a command in a pod within a Kubernetes cluster. |
+| [**Kubernetes Cluster Run Script**](/manual/jobs/job-plugins/node-steps/kubernetes-run-script) | Node Step | This plugin executes a script using a predefined container image within a Kubernetes cluster. |
+| [**Kubernetes Cluster Update Object**](/manual/jobs/job-plugins/node-steps/kubernetes-update-object) | Node Step | This plugin updates a specified object of a selected kind within a Kubernetes cluster. |
+
+
+
+
+:::warning Commercial Plugins
+This document covers the plugins available in the **commercial Runbook Automation products**. For a list of Kubernetes plugins available for **Rundeck Community (open-source)**, see documentation for the [**Open Source Kubernetes plugins**](/manual/plugins/kubernetes-open-source.md).
:::
-## Kubernetes Plugins in Runbook Automation
+## Adding Clusters & Authenticating with Kubernetes API
+There are multiple methods for adding Kubernetes clusters to Runbook Automation:
-### Cluster Discovery & Authentication Options
-There are multiple methods for adding Kubernetes clusters to Runbook Automation and authenticating with the Kubernetes API:
-
-1. [**Pod-based Service Account**](#pod-based-service-account): Install a Runner in each cluster (or namespace), and target the Runner as the cluster or particular namespace. The Runner uses the Service Account of the pod that it is hosted in to authenticate with the Kubernetes API.
+1. [**Runners with Pod-based Service Account**](#runners-with-pod-based-service-account): Install a Runner in each cluster (or namespace), and target the Runner as the cluster or particular namespace. The Runner uses the Service Account of the pod that it is hosted in to authenticate with the Kubernetes API.
2. [**Cloud Provider Integration**](#cloud-provider-integration): Use the cloud provider's API to dynamically retrieve all clusters and add them as nodes to the inventory. The cloud provider's API can also optionally be used to retrieve the necessary Kubernetes authentication to communicate with the clusters.
3. [**Manual Authentication Configuration**](#manual-authentication-configuration): Clusters are added to the inventory either manually or through method 1 or 2. The Kubernetes API Token or Kube Config file is manually added to Key Storage and configured as node-attributes.
@@ -22,7 +40,7 @@ There are multiple methods for adding Kubernetes clusters to Runbook Automation
Note that all of these methods require the use of the **Automatic** mode for the Project's use of Runners. See [this documentation](/administration/runner/runner-management/project-dispatch-configuration.md) to confirm that your project is configured correctly.
:::
-### Pod-based Service Account
+### Runners with Pod-based Service Account
With this method, clusters are added to the inventory by installing a Runner in the cluster and adding the Runner as a node to the inventory. The Runner uses the Service Account of the pod that it is hosted in to authenticate with the Kubernetes API.
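+
+As a hedged illustration, the pod's Service Account can be granted in-cluster permissions with standard Kubernetes RBAC. The names, namespace, and rules below are examples only; the exact rules depend on which node steps your Jobs run:
+
+```yaml
+# Illustrative only: grant the Runner pod's Service Account access to common resources.
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: runner-role          # hypothetical name
+  namespace: ops             # hypothetical namespace
+rules:
+  - apiGroups: ["", "apps"]
+    resources: ["pods", "pods/log", "deployments"]
+    verbs: ["get", "list", "create", "delete"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: runner-role-binding
+  namespace: ops
+subjects:
+  - kind: ServiceAccount
+    name: runner             # the Service Account of the pod hosting the Runner
+    namespace: ops
+roleRef:
+  kind: Role
+  name: runner-role
+  apiGroup: rbac.authorization.k8s.io
+```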
@@ -97,7 +115,12 @@ The Runner will now be able to authenticate with the Kubernetes API using the Se
The Cloud Provider Integration method can be used to dynamically retrieve all clusters from the cloud provider's API and add them as nodes to the inventory.
The cloud provider's API can _also_ be used to retrieve the necessary Kubernetes authentication to communicate with the clusters.
-#### Cloud Provider for Cluster Discovery
+:::tip Cloud Provider for Discovery and Pod Service Account for Authentication
+It is possible to use the Cloud Provider Integration method for cluster discovery and the Pod-based Service Account method for authentication. This is useful when you want to dynamically discover clusters but have a 1:1 relationship between Runners and clusters or do not have the option to use the cloud provider for retrieving cluster credentials.
+To take this approach, be sure to select the **Use Pod Service Account for Node Steps** option when configuring the Node Source plugins.
+:::
+
+### Cloud Provider for Cluster Discovery
Use the Node Source plugins for the cloud provider to add the clusters to the Node Inventory:
@@ -107,17 +130,96 @@ Use the Node Source plugins for the cloud provider to add the clusters to the No
Note that a Runner does _not_ need to be installed to configure these Node Source plugins.
-#### Cloud Provider for Kubernetes Authentication
+### Cloud Provider for Kubernetes Authentication
-The Cloud Provider Integration method can also be used to retrieve the necessary Kubernetes authentication to communicate with the clusters.
-This is useful when there are multiple clusters and you wish to have a single Runner that can communicate with all of them.
+Runbook Automation can use its integration with the public cloud providers to retrieve credentials to authenticate with the Kubernetes clusters.
-Follow the instructions in the **Node Source Plugins** linked in the prior sections to use the Cloud Provider Integration method.
+This method of authentication is useful when:
+1. Installing a Runner inside the clusters is not an option.
+2. There are numerous clusters and it is preferred to have a one-to-many relationship between the Runner and the clusters.
+
+With this approach, a single Runner is installed in either a VM or a container that has a path to communicate with the clusters. The Runner uses the cloud provider's API to retrieve the necessary Kubernetes authentication to communicate with the clusters:
+
+
+
+### AWS EKS Authentication
+
+To authenticate with EKS clusters using the AWS APIs:
+
+1. Install a Runner in an EC2 instance or a container that has access to the EKS clusters.
+
+2. Assign permissions to the IAM role of this EC2 or container to allow the Runner to retrieve the necessary EKS cluster credentials:
+ - **`eks:DescribeCluster`**
+3. Add the **EKS API** as an authentication mode and add the IAM Role of the Runner's host (EC2 or container) to the target clusters:
+
+ :::info Repeat for each target cluster
+ This process must be repeated for each target cluster so that the Runner can authenticate with each cluster.
+ :::
+ :::tabs
+ @tab AWS Console
+ 1. Navigate to the **EKS Console**.
+ 2. Select the target cluster, open the **Access** tab, and click **Manage Access**.
+ 
+ 3. Select either **EKS API** or **EKS API and ConfigMap**.
+ 4. Click **Save Changes**.
+ 5. Now in the **IAM access entries** section, click on **Create access entry**:
+ 
+ 6. In the **IAM principal** section, select the IAM Role of the Runner's host (EC2 or container).
+ 7. Select **Standard** for the **Type**.
+ 8. On the next screen, assign the desired **Policy Name** and **Access Scope** for this entry.
+ 9. On the next screen, click **Create**.
+
+ @tab CLI
+ 1. Install the AWS CLI as described in the [AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html).
+ 2. Run the following command to add the **EKS API** as an access mode:
+ ```
+ aws eks update-cluster-config --name my-cluster --access-config authenticationMode=API_AND_CONFIG_MAP
+ ```
+ 3. Create an access entry for the IAM Role of the Runner's host (EC2 or container). Here is an example command, but additional examples can be found in the [official AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/creating-access-entries.html):
+ ```
+ aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-self-managed-ng-1 --type STANDARD
+ ```
+ Replace **`arn:aws:iam::111122223333:role/EKS-my-cluster-self-managed-ng-1`** with the IAM Role of the Runner's host.
+ :::
+
+Now when the Runner targets the EKS clusters using the Kubernetes node-step plugins, it will be able to authenticate with the clusters using credentials fetched from AWS.
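+
+For reference, the IAM permission from step 2 can be granted with a policy similar to the following sketch. The region and account ID in the resource ARN are examples; scope it to your own clusters:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": ["eks:DescribeCluster"],
+      "Resource": "arn:aws:eks:us-east-1:111122223333:cluster/*"
+    }
+  ]
+}
+```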
+
+### Azure AKS Authentication
+
+To authenticate with AKS clusters using the Azure APIs:
+
+1. Install a Runner in a VM or container that has a network path to the target AKS clusters.
+
+2. Follow the instructions in the [Azure Plugins Overview](/manual/plugins/azure-plugins-overview.md) to create a Service Principal and add the credentials for this Service Principal to Runbook Automation.
+ :::info Pre-existing Service Principal
+ If Runbook Automation has already been integrated with Azure, then you may not need to create a new Service Principal. Instead, add these permissions to the existing Service Principal.
+ :::
+3. Assign permissions that allow this Service Principal to retrieve AKS cluster credentials:
+ - **`Microsoft.ContainerService/managedClusters/listClusterUserCredential`**
+ - Azure provides pre-built roles that have this permission, such as **Azure Kubernetes Service Cluster User Role**.
+ :::tip Role Assignment Scope
+ The role assignment of these permissions can be assigned at the **Subscription**, **Resource Group** or even on an individual cluster basis.
+ Regardless of the chosen scope, navigate to the **Access Control (IAM)** section and add the role assignment.
+ :::
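+
+The same role assignment can also be made with the Azure CLI. This is a sketch only; the Service Principal app ID, subscription, and resource group are placeholders, and the scope can be widened or narrowed as described above:
+
+```
+az role assignment create \
+  --assignee <service-principal-app-id> \
+  --role "Azure Kubernetes Service Cluster User Role" \
+  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
+```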
+
+Now when the Runner targets the AKS clusters using the Kubernetes node-step plugins, it will be able to authenticate with the clusters using credentials fetched from Azure.
+
+### Google Cloud GKE Authentication
+
+To authenticate with GKE clusters using the Google Cloud APIs:
+
+1. Install a Runner in a VM or container that has a network path to the target GKE clusters.
+
+2. Follow the instructions in the [Google Cloud Plugins Overview](/manual/plugins/gcp-plugins-overview.md) to create a Service Account and add the credentials for this Service Account to Runbook Automation.
+ :::info Pre-existing Service Account
+ If Runbook Automation has already been integrated with Google Cloud, then you may not need to create a new Service Account. Instead, add these permissions to the existing Service Account.
+ :::
+3. Add a Role to the Service Account that has the permissions to retrieve the cluster credentials from the GKE service:
+ - **`container.clusters.get`**
+ - A predefined role, such as **Kubernetes Engine Developer**, can be used for this purpose.
+
+Now when the Runner targets the GKE clusters using the Kubernetes node-step plugins, it will be able to authenticate with the clusters using credentials fetched from Google Cloud.
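+
+As a sketch, the role from step 3 can be granted to the Service Account with the gcloud CLI. The project ID and Service Account email are placeholders:
+
+```
+gcloud projects add-iam-policy-binding <project-id> \
+  --member "serviceAccount:<service-account-email>" \
+  --role "roles/container.developer"
+```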
-:::tip Cloud Provider for Discovery and Pod Service Account for Authentication
-It is possible to use the Cloud Provider Integration method for cluster discovery and the Pod-based Service Account method for authentication. This is useful when you want to dynamically discover clusters but have a 1:1 relationship between Runners and clusters or do not have the option to use the cloud provider for retrieving cluster credentials.
-To take this approach, be sure to select the **Use Pod Service Account for Node Steps** when configuring the Node Source plugins.
-:::
### Manual Authentication Configuration
diff --git a/docs/manual/projects/resource-model-sources/aws-eks.md b/docs/manual/projects/resource-model-sources/aws-eks.md
index b6253d033..ced75976a 100644
--- a/docs/manual/projects/resource-model-sources/aws-eks.md
+++ b/docs/manual/projects/resource-model-sources/aws-eks.md
@@ -1,47 +1,79 @@
-# AWS EKS Resource Model Source
+# AWS EKS Node Source
::: enterprise
:::
-The AWS EKS (Elastic Kubernetes Service) Resource Model Source allows you to import your EKS clusters as nodes within Runbook Automation. This enables you to manage and execute jobs on your Kubernetes clusters directly from Runbook Automation.
+The AWS EKS (Elastic Kubernetes Service) Node Source can be used to dynamically retrieve EKS clusters and add them as nodes to the node inventory. As new clusters are created or removed, the inventory will be automatically updated.
-### Configuration
+## Configuration
-To configure the AWS EKS Resource Model Source:
+### Prerequisites
-1. In your project, go to "Project Settings" > "Edit Nodes".
-2. Click "Add a new Node Source".
-3. Select "AWS EKS Clusters" from the list of available node sources.
-4. Configure the following settings:
+Before configuring the AWS EKS Node Source, the following permissions must be added to the IAM Role associated with the Runbook Automation instance:
- - **AWS Region**: The AWS region or regions where your EKS clusters are located.
- - **Assume Role ARN**: Optionally specify an IAM Role ARN to assume for retrieving EKS Clusters.
- - **Access Key ID**: The path to your AWS access key in Key Storage.
- authentication.
+```
+eks:DescribeCluster
+eks:ListClusters
+```
+For steps on how to associate an IAM Role with Runbook Automation (SaaS or Self-Hosted), refer to the [AWS Plugins Overview](/manual/plugins/aws-plugins-overview.md).
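+
+These permissions can be attached to the IAM Role with a policy similar to the following minimal example. Where possible, tighten the `Resource` element to the ARNs of your clusters:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": [
+        "eks:ListClusters",
+        "eks:DescribeCluster"
+      ],
+      "Resource": "*"
+    }
+  ]
+}
+```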
-### Authentication
+### Add EKS Node Source
-You can configure AWS credentials at three levels:
+To configure the AWS EKS Node Source:
-1. Resource Model Configuration
-2. Plugin Group Properties
+1. In your project, go to "Project Settings" > "Edit Nodes".
-### Node Attributes
+2. Click "Add a new Node Source".
+3. Select "AWS Kubernetes Clusters" from the list of available node sources:
+ 
+4. **Region**: The AWS region or regions where your EKS clusters are located.
+5. **Use Pod Service Account for Node Steps**: Select this option if you intend to deploy the Enterprise Runner into these clusters and use the pod service account for authentication.
+ :::tip Using Pod Service Account Through Runners
+ This option is useful when you want to dynamically discover clusters using the EKS integration, but have a 1:1 relationship between Runners and clusters or do not have the option to use the cloud provider for retrieving cluster credentials.
+
+ For instructions on how to use the pod service account as well as more detail on the various cluster authentication methods, see the [Kubernetes Plugins Overview](/manual/plugins/kubernetes-plugins-overview.md).
+ :::
+6. **Assume Role ARN**: Optionally specify an IAM Role ARN to assume for retrieving EKS Clusters. This is useful when you want to target EKS Clusters across multiple AWS Accounts within a single Runbook Automation Project.
+7. Click "Save".
+
+### Clusters in the Node Inventory
Each EKS cluster will be represented as a node with the following attributes:
-- `gcp-location`: The AWS region/zone of the cluster
-- `kubernetes-cluster-endpoint`: The API server endpoint of the cluster
-- `kubernetes-use-pod-service-account`: Whether to use pod service account for authentication
-- `kubernetes-cloud-provider`: Set to "aws-eks"
+- **`AWS-EKS:region`**: The AWS region of the cluster
+- **`AWS-EKS:cluster-status`**: The status of the cluster
+- **`AWS-EKS:cluster-version`**: The Kubernetes version of the cluster
+- **`kubernetes-cluster-endpoint`**: The API server endpoint of the cluster
+- **`kubernetes-use-pod-service-account`**: Whether to use pod service account for authentication
+- **`kubernetes-cloud-provider`**: "aws-eks"
+
+
+
+If tags are associated with the EKS clusters, they will be added as node attributes as well.
### Troubleshooting
-If you encounter issues:
+#### Node Source Unauthorized Error
+
+**Some Node Source returned an "Unauthorized" message**: This error indicates that the proper ACL permissions are not configured for the node sources within this project to access the necessary secrets within key storage:
+
+
+
+To resolve this issue, add an ACL Policy that grants the node sources within this project access to the required secrets.
+Here is an example ACL Policy that grants the `platform-engineering` project access to the `keys/project/platform-engineering` directory within Key Storage:
+```yaml
+by:
+ urn: project:platform-engineering
+context:
+ application: rundeck
+for:
+ storage:
+ - match:
+ path: 'keys/project/platform-engineering/.*'
+ allow: [read]
+description: Allow access to key storage
+```
-1. Check the logs for any error messages.
-2. Verify your AWS credentials and permissions.
-3. Ensure your EKS cluster is running and accessible.
-4. Check network connectivity between Runbook Automation and your AWS resources.
+#### AWS EKS Clusters Not Found
-For more detailed information, refer to the [AWS EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)
\ No newline at end of file
+If the EKS Node Source does not return any clusters, ensure that the IAM Role associated with Runbook Automation has the necessary permissions to describe and list EKS clusters. The IAM Role should have the permissions outlined in the [Prerequisites](#prerequisites) section.
\ No newline at end of file
diff --git a/docs/manual/projects/resource-model-sources/azure-aks.md b/docs/manual/projects/resource-model-sources/azure-aks.md
index 1902d9185..523754cbc 100644
--- a/docs/manual/projects/resource-model-sources/azure-aks.md
+++ b/docs/manual/projects/resource-model-sources/azure-aks.md
@@ -3,24 +3,91 @@
::: enterprise
:::
-The Azure AKS (Azure Kubernetes Service) Resource Model Source allows you to import your AKS clusters as nodes within Runbook Automation. This plugin provides node source functionality for managing and executing jobs on your Azure Kubernetes clusters directly from Runbook Automation.
+The Azure AKS (Azure Kubernetes Service) Node Source can be used to dynamically retrieve AKS clusters and add them as nodes to the node inventory. As new clusters are created or removed, the inventory will be automatically updated.
-### Configuration
+## Configuration
-To configure the Azure AKS Resource Model Source:
+### Prerequisites
+
+Before configuring the Azure AKS Node Source, the following permissions must be added to the Azure Service Principal associated with the Runbook Automation instance:
+```
+Microsoft.ContainerService/managedClusters/read
+```
+
+1. Create a service principal and add the credential to Runbook Automation. If you have already created a service principal and added the credentials to Runbook Automation, skip to step 2. Otherwise, refer to the [Azure Plugins Overview](/manual/plugins/azure-plugins-overview).
+
+2. Navigate to either the **Subscription** or **Resource Group** where your Kubernetes clusters reside.
+
+3. Click on **Access Control (IAM)** -> **Add** -> **Add Role Assignment**:
+ 
+
+4. You can then use any role that has **`Microsoft.ContainerService/managedClusters/read`** as a permission.
+   * Azure provides many built-in roles that have this permission, such as **Azure Kubernetes Service Cluster Monitoring User**.
+
+5. Select a role, then on the next screen click **+Select Members** and choose your App Registration (service principal) from the list:
+ 
+
+6. Click **Review + assign** and then **Save**.
+
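+The portal steps above can also be performed with the Azure CLI; this is a sketch with placeholder values for the Service Principal app ID, subscription, and resource group:
+
+```
+az role assignment create \
+  --assignee <service-principal-app-id> \
+  --role "Azure Kubernetes Service Cluster Monitoring User" \
+  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
+```
+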
+### Add AKS Node Source in Runbook Automation
+To configure the Azure AKS Node Source:
+
+1. In your project, go to "Project Settings" > "Edit Nodes".
-1. In your project, go to "Project Settings" > "Edit Nodes".
2. Click "Add a new Node Source".
3. Select "Azure Kubernetes Clusters" from the list of available node sources.
-4. Configure the following settings:
-- **Subscription**: The Azure Subscription ID. If not provided, the value from the Azure plugin group at the Project or System context will be used.
-- **Tenant ID**: The Azure Tenant ID. If not provided, the value from the Azure plugin group at the Project or System context will be used.
-- **Client ID**: The path to the Key Storage entry for the Azure Client ID. If not provided, the Client ID value from the Azure plugin group at the Project or System context will be used. If an "Unauthorized" error occurs, ensure that the proper policy is added to ACLs.
-- **Azure Client Secret**: The path to the Key Storage entry for the Client Secret. If not provided, the value from the Azure plugin group at the Project or System context will be used. If an "Unauthorized" error occurs, ensure that the proper policy is added to ACLs.
-- **Resource Group**: Optionally filter the clusters listed from a specific Resource Group.
-- **Use Pod Service Account for Node Steps**: Choose whether to authenticate with the Pod Service Account for Job steps. Set to `True` if Runbook Automation or a Runner is executing within the targeted cluster.
+4. If credentials for the Azure service principal have already been added to the **Project** or **System Configuration**, then adding them to this Node Source is optional. Adding them here will override the Project or System Configuration.
+ - **Subscription**: The Azure Subscription ID. If not provided, the value from the Azure plugin group at the Project or System context will be used.
+ - **Tenant ID**: The Azure Tenant ID. If not provided, the value from the Azure plugin group at the Project or System context will be used.
+ - **Client ID**: The path to the Key Storage entry for the Azure Client ID. If not provided, the Client ID value from the Azure plugin group at the Project or System context will be used. If an "Unauthorized" error occurs, ensure that the proper policy is added to ACLs.
+ - **Azure Client Secret**: The path to the Key Storage entry for the Client Secret. If not provided, the value from the Azure plugin group at the Project or System context will be used. If an "Unauthorized" error occurs, ensure that the proper policy is added to ACLs.
+5. **Resource Group**: Optionally filter the clusters listed from a specific Resource Group.
+6. **Use Pod Service Account for Node Steps**: Choose whether to authenticate with the Pod Service Account for Job steps.
+ :::tip Using Pod Service Account Through Runners
+ This option is useful when you want to dynamically discover clusters using the AKS integration, but have a 1:1 relationship between Runners and clusters or do not have the option to use the cloud provider for retrieving cluster credentials.
+
+ For instructions on how to use the pod service account as well as more detail on the various cluster authentication methods, see the [Kubernetes Plugins Overview](/manual/plugins/kubernetes-plugins-overview.md).
+ :::
+7. Click **Save**.
+
+### Clusters in the Node Inventory
+
+Each AKS cluster will be represented as a node within the node inventory:
+
+
+
+By default, the following attributes will be added to each node:
+
+* **`Azure-Kubernetes:resource-group`**: The Azure Resource Group where the cluster resides.
+* **`Azure-Kubernetes:region`**: The Azure region where the cluster is located.
+* **`Azure-Kubernetes:power-state`**: The power-state of the cluster.
+* **`Azure-Kubernetes:node-resource-group`**: The Azure Resource Group where the nodes reside.
+* **`Azure-Kubernetes:cluster-id`**: The Azure Cluster ID.
+* **`kubernetes-cloud-provider`**: The cloud provider, which will be "azure-kubernetes".
+
+### Troubleshooting
+
+#### Node Source Unauthorized Error
+
+**Some Node Source returned an "Unauthorized" message**: This error indicates that the proper ACL permissions are not configured for the node sources within this project to access the necessary secrets within key storage:
+
-## Authentication
+To resolve this issue, add an ACL Policy that grants the node sources within this project access to the required secrets.
+Here is an example ACL Policy that grants the `platform-engineering` project access to the `keys/project/platform-engineering` directory within Key Storage:
+```yaml
+by:
+ urn: project:platform-engineering
+context:
+ application: rundeck
+for:
+ storage:
+ - match:
+ path: 'keys/project/platform-engineering/.*'
+ allow: [read]
+description: Allow access to key storage
+```
-Follow the steps outlined in the [Azure Plugins Overview](/manual/plugins/azure-plugins-overview) to generate the necessary Azure credentials and set them up at the project or system level. These credentials can be overridden in individual Azure Kubernetes Clusters Node Source configurations if needed.
\ No newline at end of file
+#### Azure AKS Clusters Not Found
+If the Azure AKS Node Source does not return any clusters, ensure that the Azure Service Principal has the necessary permissions to read the AKS clusters.
+The Service Principal must have the **`Microsoft.ContainerService/managedClusters/read`** permission.
\ No newline at end of file
diff --git a/docs/manual/projects/resource-model-sources/gcp-gke.md b/docs/manual/projects/resource-model-sources/gcp-gke.md
index 1a85a182d..aa2c9f795 100644
--- a/docs/manual/projects/resource-model-sources/gcp-gke.md
+++ b/docs/manual/projects/resource-model-sources/gcp-gke.md
@@ -3,65 +3,91 @@
::: enterprise
:::
-The GCP GKE (Google Kubernetes Engine) Resource Model Source allows you to import your GKE clusters as nodes within Runbook Automation. This plugin provides node source functionality for managing and executing jobs on your Google Cloud Platform Kubernetes clusters directly from Runbook Automation.
+The GCP GKE (Google Kubernetes Engine) Node Source can be used to dynamically retrieve GKE clusters and add them as nodes to the node inventory. As new clusters are created or removed, the inventory will be automatically updated with clusters represented as nodes.
-### Configuration
+## Configuration
-To configure the GCP GKE Resource Model Source:
+### Prerequisites
-1. In your project, go to "Project Settings" > "Edit Nodes".
-2. Click "Add a new Node Source".
-3. Select "GCP Kubernetes Engine Clusters" from the list of available node sources.
-4. Configure the following settings:
+Before configuring the GCP GKE Node Source, the service account associated with Runbook Automation must be granted permission to list the GKE clusters.
+
+See the [Google Cloud Plugins Overview](/manual/plugins/gcp-plugins-overview.md) for steps on how to associate a service account with Runbook Automation.
-- **Project ID**: The GCP Project ID to use for accessing the GKE clusters.
-- **Region or Zone**: The GCP region or zone where your GKE clusters are located. You can use `*` to include all regions or zones.
-- **Access Key Path**: The Key Storage path for the GCP Access Key credentials.
-- **Use Pod Service Account for Node Steps**: Choose whether to authenticate with the Pod Service Account for Job steps. Set to `True` if Runbook Automation or a Runner is executing within the targeted cluster.
+A predefined role such as **Kubernetes Engine Cluster Viewer** can be used to grant the necessary permissions. This role has the following permissions:
-### Authentication
+- `container.clusters.get`
+- `container.clusters.list`
+- `resourcemanager.projects.get`
+- `resourcemanager.projects.list`
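+
+As a sketch (assuming the gcloud CLI is available and a service account already exists; the project ID and service account email below are placeholders), this role can be granted with:
+
+```shell
+# Grant the Kubernetes Engine Cluster Viewer role
+# (roles/container.clusterViewer) to the service account.
+# <PROJECT_ID> and <SA_EMAIL> are placeholders.
+gcloud projects add-iam-policy-binding <PROJECT_ID> \
+  --member="serviceAccount:<SA_EMAIL>" \
+  --role="roles/container.clusterViewer"
+```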
-You can configure GCP credentials at three levels:
+
-1. Resource Model Configuration
-2. Plugin Group Properties
+### Add the GKE Node Source in Runbook Automation
-To set up credentials:
+To configure the GCP GKE Node Source plugin:
-1. Create a new Key Storage entry of type 'private key' and upload the gcp-key-file for your GCP credentials file.
-2. In the plugin configuration, provide:
-- GCP Project ID
-- Path to the GCP credentials in Key Storage
-- Region/Zone specification
+1. In your project, go to "Project Settings" > "Edit Nodes".
+
+2. Click "Add a new Node Source".
+3. Select "GCP Kubernetes Engine Clusters" from the list of available node sources.
+4. Configure the following settings:
+
+ - **Project ID**: The GCP Project ID to use for accessing the GKE clusters.
+ - **Region or Zone**: The GCP region or zone where your GKE clusters are located. You can use `-` to include all regions or zones.
+ - **Access Key Path**: The Key Storage path for the GCP Access Key credentials.
+ - :::info GCP Authentication at Project or System Level
+ Authentication for GCP plugins can be configured at the Project or System level by following the [Google Cloud Plugins Overview](/manual/plugins/gcp-plugins-overview.md). If GCP authentication is already set in the Project or System Configuration, this field can be left blank.
+ :::
+5. **Use Pod Service Account for Node Steps**: Choose whether to authenticate with the Pod Service Account for Job steps. Set to `True` if Runbook Automation or a Runner is executing within the targeted cluster.
+ :::tip Using Pod Service Account Through Runners
+ This option is useful when you want to dynamically discover clusters using the GKE integration, but have a 1:1 relationship between Runners and clusters or do not have the option to use the cloud provider for retrieving cluster credentials.
+
+ For instructions on how to use the pod service account as well as more detail on the various cluster authentication methods, see the [Kubernetes Plugins Overview](/manual/plugins/kubernetes-plugins-overview.md).
+ :::
### Node Attributes
Each GKE cluster will be represented as a node with the following attributes:
-- `gcp-project-id`: The GCP project ID containing the cluster
- `gcp-location`: The GCP region/zone of the cluster
- `kubernetes-cluster-endpoint`: The API server endpoint of the cluster
-- `kubernetes-use-pod-service-account`: Whether to use pod service account for authentication
-- `kubernetes-cloud-provider`: Set to "gcp-gke"
+- `kubernetes-cloud-provider`: Set to **"gcp-gke"**
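+
+For illustration, a discovered cluster might appear in the node inventory along these lines (the node name, location, and endpoint values here are hypothetical examples):
+
+```yaml
+# Hypothetical example of a GKE cluster represented as a node
+my-gke-cluster:
+  nodename: my-gke-cluster
+  gcp-location: us-central1
+  kubernetes-cluster-endpoint: https://<cluster-endpoint>
+  kubernetes-cloud-provider: gcp-gke
+```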
-### Authentication Modes
-
-The plugin supports two authentication modes:
-
-1. **GCP API Authentication**: Default mode when `Use Pod Service Account` is set to `false`. Uses GCP credentials for authentication.
-2. **Pod Service Account**: When set to `true`, uses the Kubernetes service account of the pod for authentication. Ideal when Runbook Automation is running within the same cluster.
+
### Troubleshooting
-If you encounter issues:
-
-1. Check the Runbook Automation logs for any error messages.
-2. Verify your GCP credentials and permissions:
-- Ensure the service account has the necessary GKE permissions
-- Verify the credentials file is properly stored in Key Storage
-3. Ensure your GKE cluster is running and accessible.
-4. Check network connectivity between Runbook Automation and your GCP resources.
-5. Verify the correct Project ID and Region/Zone settings.
+#### Node Source Unauthorized Error
+
+**Some Node Source returned an "Unauthorized" message**: This error indicates that the node sources in this project do not have the ACL permissions required to read the necessary secrets from Key Storage:
+
+
+
+To resolve this issue, add an ACL Policy that grants the project's node sources access to the required Key Storage paths.
+Here is an example ACL Policy that grants the `platform-engineering` project read access to the `keys/project/platform-engineering` directory within Key Storage:
+```yaml
+by:
+ urn: project:platform-engineering
+context:
+ application: rundeck
+for:
+ storage:
+ - match:
+ path: 'keys/project/platform-engineering/.*'
+ allow: [read]
+description: Allow access to key storage
+```
+
+#### GKE Clusters Not Found
+
+If the GKE clusters have not been added to the node inventory, verify the following:
+
+1. Check your GCP credentials and permissions:
+ - Ensure the service account has the necessary GKE permissions
+ - Verify the credentials file is properly stored in Key Storage
+2. Ensure your GKE cluster is running and accessible.
+3. Verify the correct Project ID and Region/Zone settings.
+4. If running the Self-Hosted solution, check the Runbook Automation logs for any error messages.
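+
+As a quick check (assuming the gcloud CLI is available and is authenticated with the same service account key stored in Key Storage), the clusters visible to those credentials can be listed with:
+
+```shell
+# Authenticate as the service account and list the clusters it can see.
+# <KEY_FILE>.json and <PROJECT_ID> are placeholders.
+gcloud auth activate-service-account --key-file=<KEY_FILE>.json
+gcloud container clusters list --project <PROJECT_ID>
+```
+
+If the expected clusters are missing from this output, the issue lies with the GCP credentials or permissions rather than with the Node Source configuration.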
### Additional Resources