docs/en/guides/cost/kubecost.md (+6 -5)

@@ -1,4 +1,4 @@
-# Cost visibility and resource right-sizing using Kubecost
+# Using Kubecost
 
 Kubecost provides customers with visibility into spend and resource efficiency in Kubernetes environments. At a high level, Amazon EKS cost monitoring is deployed with Kubecost, which includes Prometheus, an open-source monitoring system and time-series database. Kubecost reads metrics from Prometheus, performs cost allocation calculations, and writes the metrics back to Prometheus. Finally, the Kubecost front end reads metrics from Prometheus and shows them on the Kubecost user interface (UI). The architecture is illustrated by the following diagram:
@@ -16,8 +16,9 @@ A downside of the above two approaches is that you may end up with unused capacity
 
 The most efficient way to track costs in multi-tenant Kubernetes clusters is to distribute incurred costs based on the amount of resources consumed by workloads. This pattern allows you to maximize the utilization of your EC2 instances because different workloads can share nodes, which lets you increase the pod density on your nodes. However, calculating costs by workload or namespace is a challenging task. Understanding the cost responsibility of a workload requires aggregating all the resources consumed or reserved during a time frame, and evaluating the charges based on the cost of the resource and the duration of the usage. This is the exact challenge that Kubecost is dedicated to tackling.
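The allocation idea described above can be sketched in a few lines of Python. This is an illustrative model only, not Kubecost's implementation; the pod names, hourly rates, and the max(request, usage) billing rule are assumptions for the sake of the example:

```python
# Illustrative sketch: distribute one node's cost across pods in proportion
# to the resources each reserves, where "reserved" = max(request, usage).
# Whatever nobody reserves is the node's idle cost.
from dataclasses import dataclass

@dataclass
class PodUsage:
    name: str
    namespace: str
    cpu_request: float  # vCPUs
    cpu_usage: float
    mem_request: float  # GiB
    mem_usage: float

def allocate_node_cost(pods, node_cpu, node_mem,
                       cpu_hourly_rate, mem_hourly_rate, hours):
    """Return (cost per pod, idle cost) for one node over a time window."""
    costs = {}
    billed_cpu = billed_mem = 0.0
    for p in pods:
        cpu = max(p.cpu_request, p.cpu_usage)
        mem = max(p.mem_request, p.mem_usage)
        costs[f"{p.namespace}/{p.name}"] = (
            cpu * cpu_hourly_rate + mem * mem_hourly_rate) * hours
        billed_cpu += cpu
        billed_mem += mem
    idle = ((node_cpu - billed_cpu) * cpu_hourly_rate +
            (node_mem - billed_mem) * mem_hourly_rate) * hours
    return costs, idle

# Hypothetical workloads on a 4 vCPU / 16 GiB node over 24 hours.
pods = [
    PodUsage("api", "team-a", cpu_request=1.0, cpu_usage=0.4,
             mem_request=2.0, mem_usage=1.5),
    PodUsage("worker", "team-b", cpu_request=0.5, cpu_usage=0.9,
             mem_request=1.0, mem_usage=0.8),
]
costs, idle = allocate_node_cost(pods, node_cpu=4, node_mem=16,
                                 cpu_hourly_rate=0.02,
                                 mem_hourly_rate=0.005, hours=24)
```

Per-pod costs plus idle cost always sum to the full node cost, which is what makes this kind of allocation a complete showback model.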
 
-!!! tip
+:::tip
 Take a look at our [One Observability Workshop](https://catalog.workshops.aws/observability/en-US/aws-managed-oss/amp/ingest-kubecost-metrics) to get hands-on experience with Kubecost.
+:::
 
 ## Recommendations
 
 ### Cost Allocation
@@ -49,7 +50,7 @@ So, idle costs can also be thought of as the cost of the space that the Kubernetes
 
 ### Network Cost
 
-Kubecost uses a best-effort approach to allocate network transfer costs to the workloads generating those costs. The accurate way of determining the network cost is by using the combination of [AWS Cloud Integration](https://docs.kubecost.com/install-and-configure/install/cloud-integration/aws-cloud-integrations) and the [network costs daemonset](https://docs.kubecost.com/install-and-configure/advanced-configuration/network-costs-configuration).
+Kubecost uses a best-effort approach to allocate network transfer costs to the workloads generating those costs. The accurate way of determining the network cost is by using the combination of [AWS Cloud Integration](https://www.ibm.com/docs/en/kubecost/self-hosted/3.x?topic=integration-aws-cloud-using-irsaeks-pod-identities) and the [network costs daemonset](https://docs.kubecost.com/install-and-configure/advanced-configuration/network-costs-configuration).
 
 Take your efficiency score and idle cost into account to fine-tune workloads and utilize the cluster to its full potential. This takes us to the next topic: cluster right-sizing.
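The efficiency score mentioned above compares what a workload actually uses against what it requests. As a rough sketch (the 50/50 weighting and the numbers are assumptions for illustration, not Kubecost's exact formula):

```python
# Illustrative efficiency score: usage divided by request, blended across
# CPU and memory. A low score flags over-provisioned requests to right-size.
def efficiency(cpu_usage, cpu_request, mem_usage, mem_request):
    """Blend CPU and memory efficiency with equal weight (an assumption)."""
    cpu_eff = cpu_usage / cpu_request if cpu_request else 0.0
    mem_eff = mem_usage / mem_request if mem_request else 0.0
    return 0.5 * cpu_eff + 0.5 * mem_eff

# A workload using 0.2 of 1 requested vCPU and 1 of 4 requested GiB
# only consumes about 22% of what it reserves.
score = efficiency(cpu_usage=0.2, cpu_request=1.0,
                   mem_usage=1.0, mem_request=4.0)
```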
-Kubecost provides a web dashboard that you can access through kubectl port-forward, an ingress, or a load balancer. The enterprise version of Kubecost also supports restricting access to the dashboard using [SSO/SAML](https://docs.kubecost.com/install-and-configure/advanced-configuration/user-management-oidc) and providing varying levels of access, for example restricting a team's view to only the products they are responsible for.
+Kubecost provides a web dashboard that you can access through kubectl port-forward, an ingress, or a load balancer. The enterprise version of Kubecost also supports restricting access to the dashboard using [SSO/SAML](https://www.ibm.com/docs/en/kubecost/self-hosted/3.x?topic=configuration-user-management-oidc) and providing varying levels of access, for example restricting a team's view to only the products they are responsible for.
 
 In an AWS environment, consider using the [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) to expose Kubecost, and use [Amazon Cognito](https://aws.amazon.com/cognito/) for authentication, authorization, and user management. You can learn more in [How to use Application Load Balancer and Amazon Cognito to authenticate users for your Kubernetes web apps](https://aws.amazon.com/blogs/containers/how-to-use-application-load-balancer-and-amazon-cognito-to-authenticate-users-for-your-kubernetes-web-apps/).
docs/en/guides/operational/gitops-with-amg/gitops-with-amg.md (+2 -2)

@@ -25,13 +25,13 @@ The [grafana-operator](https://github.com/grafana-operator/grafana-operator) is
 
 GitOps is a software development and operations methodology that uses Git as the source of truth for deployment configurations. It keeps the desired state of an application or infrastructure in a Git repository and uses Git-based workflows to manage and deploy changes, so that the whole system is described declaratively. It is an operational model that lets you manage the state of multiple Kubernetes clusters by leveraging the best practices of version control, immutable artifacts, and automation.
 
-Flux is a GitOps tool that automates the deployment of applications on Kubernetes. It works by continuously monitoring the state of a Git repository and applying any changes to a cluster. Flux integrates with various Git providers such as GitHub, [GitLab](https://dzone.com/articles/auto-deploy-spring-boot-app-using-gitlab-cicd), and Bitbucket. When changes are made to the repository, Flux automatically detects them and updates the cluster accordingly.
+Flux is a GitOps tool that automates the deployment of applications on Kubernetes. It works by continuously monitoring the state of a Git repository and applying any changes to a cluster. Flux integrates with various Git providers such as GitHub, [GitLab](https://dzone.com/articles/auto-deploy-spring-boot-app-using-gitlab-cicd/), and Bitbucket. When changes are made to the repository, Flux automatically detects them and updates the cluster accordingly.
 
 ### Advantages of using Flux
 
 * **Automated deployments**: Flux automates the deployment process, reducing manual errors and freeing up developers to focus on other tasks.
 * **Git-based workflow**: Flux leverages Git as a source of truth, which makes it easier to track and revert changes.
-* **Declarative configuration**: Flux uses [Kubernetes](https://dzone.com/articles/kubernetes-full-stack-example-with-kong-ingress-co) manifests to define the desired state of a cluster, making it easier to manage and track changes.
+* **Declarative configuration**: Flux uses [Kubernetes](https://dzone.com/articles/kubernetes-full-stack-example-with-kong-ingress-co/) manifests to define the desired state of a cluster, making it easier to manage and track changes.
docs/en/guides/serverless/aws-native/lambda-based-observability.md (+6 -11)

@@ -179,15 +179,17 @@ One way to publish custom metrics to AWS CloudWatch is by calling CloudWatch met
 
 To achieve this, you can generate the logs using the [EMF specification](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html) and send them to CloudWatch using the `PutLogEvents` API. To simplify the process, there are **two client libraries that support the creation of metrics in the EMF format**.
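The shape of an EMF payload can be sketched directly, without a client library. This is a minimal example of the structure the specification requires (the `OrderService`/`OrdersPlaced` names are made up for illustration); printed to stdout from a Lambda function, CloudWatch extracts the metric automatically:

```python
import json
import time

def emf_event(namespace, dimensions, metric_name, unit, value):
    """Build one EMF-formatted log line as a JSON string."""
    return json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),  # epoch millis, required
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [list(dimensions)],  # dimension *names*
                "Metrics": [{"Name": metric_name, "Unit": unit}],
            }],
        },
        **dimensions,        # dimension values live at the JSON root
        metric_name: value,  # the metric value lives at the root too
    })

# In a Lambda handler, printing the line is enough: it lands in
# CloudWatch Logs, where EMF extraction turns it into a metric.
print(emf_event("OrderService", {"Service": "checkout"},
                "OrdersPlaced", "Count", 1))
```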
 
 ### Use [CloudWatch Lambda Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Lambda-Insights.html) to monitor system-level metrics
 
 CloudWatch Lambda Insights provides system-level metrics, including CPU time, memory usage, disk utilization, and network performance. Lambda Insights also collects, aggregates, and summarizes diagnostic information such as `cold starts` and Lambda worker shutdowns. Lambda Insights leverages a CloudWatch Lambda extension, which is packaged as a Lambda layer. Once enabled, it collects system-level metrics and emits a single performance log event to CloudWatch Logs, in the embedded metric format, for every invocation of that Lambda function.
 
-!!! note
+:::note
 CloudWatch Lambda Insights is not enabled by default and needs to be turned on per Lambda function.
+:::
 
 You can enable it via the AWS console or via Infrastructure as Code (IaC). Here is an example of how to enable it using the AWS Serverless Application Model (SAM): add the `LambdaInsightsExtension` layer to your Lambda function, and attach the managed IAM policy `CloudWatchLambdaInsightsExecutionRolePolicy`, which gives your Lambda function permission to create a log stream and call the `PutLogEvents` API to write logs to it.
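As a sketch, the SAM template fragment for this can look like the following (the function name, runtime, and layer version number are placeholders; look up the current `LambdaInsightsExtension` layer ARN for your region):

```yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      Layers:
        # Lambda Insights extension layer; the version suffix is a
        # placeholder and varies by region.
        - !Sub "arn:aws:lambda:${AWS::Region}:580247275435:layer:LambdaInsightsExtension:38"
      Policies:
        # Grants log-stream creation and PutLogEvents for Lambda Insights.
        - CloudWatchLambdaInsightsExecutionRolePolicy
```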
 To instrument individual clients, wrap your AWS SDK client in a call to `AWSXRay.captureAWSClient`. Do not use both `captureAWS` and `captureAWSClient` together; this will lead to duplicate traces.
 
 In this observability best practices guide for AWS Lambda-based serverless applications, we highlighted critical aspects such as logging, metrics, and tracing using native AWS services such as Amazon CloudWatch and AWS X-Ray. We recommended using the AWS Lambda Powertools library to easily add observability best practices to your application. By adopting these best practices, you can unlock valuable insights into your serverless application, enabling faster error detection and performance optimization.
 
 For a deeper dive, we highly recommend the AWS Native Observability module of the [One Observability Workshop](https://catalog.workshops.aws/observability/en-US).
(new file; filename not shown)

+A curated collection of resources to help you learn and implement AWS observability best practices.
+
+## Learning Resources
+
+### AWS Cloud Operations Show
+
+Collection of live stream recordings where we dive deep into interesting AWS Cloud Operations features and use cases. This video series provides practical demonstrations and expert insights into cloud operations.
+
+[Watch on YouTube](https://www.youtube.com/playlist?list=PLehXSATXjcQHj8bPSf0uZuQBoxJ7a7ag7)
+
+### One Observability Workshop
+
+This hands-on workshop provides practical experience with the wide variety of tool sets AWS offers for monitoring and observability. The workshop is highly customizable: you can pick and choose any module based on your interests and availability.
+
+[Start the Workshop](https://catalog.workshops.aws/observability/)
+
+!!! tip
+    The One Observability Workshop is perfect for teams looking to get hands-on experience with AWS observability tools in a structured learning environment.
+
+## Demos and Examples
+
+### Application Signals Demo
+
+A complete demonstration application showcasing AWS Application Signals capabilities. This demo provides a working example (the Pet Clinic) which you can deploy and explore to understand how Application Signals works in practice.
+
+[View on GitHub](https://github.com/aws-observability/application-signals-demo)
+
+## Documentation
+
+### AWS Observability Best Practices Guide
+
+The home page of this Observability Best Practices guide. This comprehensive resource covers best practices, patterns, and recipes for implementing observability on AWS.
+
+[Visit the Guide](https://aws-observability.github.io/observability-best-practices/)
+
+### AWS Distro for OpenTelemetry (ADOT)
+
+Comprehensive guides on how to instrument your code with ADOT. Learn how to collect distributed traces and metrics from your applications using the AWS-supported distribution of OpenTelemetry.
docusaurus/docs/guides/cost/kubecost.md (+2 -2)

@@ -50,7 +50,7 @@ So, idle costs can also be thought of as the cost of the space that the Kubernetes
 
 ### Network Cost
 
-Kubecost uses a best-effort approach to allocate network transfer costs to the workloads generating those costs. The accurate way of determining the network cost is by using the combination of [AWS Cloud Integration](https://docs.kubecost.com/install-and-configure/install/cloud-integration/aws-cloud-integrations) and the [network costs daemonset](https://docs.kubecost.com/install-and-configure/advanced-configuration/network-costs-configuration).
+Kubecost uses a best-effort approach to allocate network transfer costs to the workloads generating those costs. The accurate way of determining the network cost is by using the combination of [AWS Cloud Integration](https://www.ibm.com/docs/en/kubecost/self-hosted/3.x?topic=integration-aws-cloud-using-irsaeks-pod-identities) and the [network costs daemonset](https://docs.kubecost.com/install-and-configure/advanced-configuration/network-costs-configuration).
 
 Take your efficiency score and idle cost into account to fine-tune workloads and utilize the cluster to its full potential. This takes us to the next topic: cluster right-sizing.
 
-Kubecost provides a web dashboard that you can access through kubectl port-forward, an ingress, or a load balancer. The enterprise version of Kubecost also supports restricting access to the dashboard using [SSO/SAML](https://docs.kubecost.com/install-and-configure/advanced-configuration/user-management-oidc) and providing varying levels of access, for example restricting a team's view to only the products they are responsible for.
+Kubecost provides a web dashboard that you can access through kubectl port-forward, an ingress, or a load balancer. The enterprise version of Kubecost also supports restricting access to the dashboard using [SSO/SAML](https://www.ibm.com/docs/en/kubecost/self-hosted/3.x?topic=configuration-user-management-oidc) and providing varying levels of access, for example restricting a team's view to only the products they are responsible for.
 
 In an AWS environment, consider using the [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) to expose Kubecost, and use [Amazon Cognito](https://aws.amazon.com/cognito/) for authentication, authorization, and user management. You can learn more in [How to use Application Load Balancer and Amazon Cognito to authenticate users for your Kubernetes web apps](https://aws.amazon.com/blogs/containers/how-to-use-application-load-balancer-and-amazon-cognito-to-authenticate-users-for-your-kubernetes-web-apps/).
docusaurus/docs/guides/operational/gitops-with-amg/gitops-with-amg.md (+2 -2)

@@ -25,13 +25,13 @@ The [grafana-operator](https://github.com/grafana-operator/grafana-operator) is
 
 GitOps is a software development and operations methodology that uses Git as the source of truth for deployment configurations. It keeps the desired state of an application or infrastructure in a Git repository and uses Git-based workflows to manage and deploy changes, so that the whole system is described declaratively. It is an operational model that lets you manage the state of multiple Kubernetes clusters by leveraging the best practices of version control, immutable artifacts, and automation.
 
-Flux is a GitOps tool that automates the deployment of applications on Kubernetes. It works by continuously monitoring the state of a Git repository and applying any changes to a cluster. Flux integrates with various Git providers such as GitHub, [GitLab](https://dzone.com/articles/auto-deploy-spring-boot-app-using-gitlab-cicd), and Bitbucket. When changes are made to the repository, Flux automatically detects them and updates the cluster accordingly.
+Flux is a GitOps tool that automates the deployment of applications on Kubernetes. It works by continuously monitoring the state of a Git repository and applying any changes to a cluster. Flux integrates with various Git providers such as GitHub, [GitLab](https://dzone.com/articles/auto-deploy-spring-boot-app-using-gitlab-cicd/), and Bitbucket. When changes are made to the repository, Flux automatically detects them and updates the cluster accordingly.
 
 ### Advantages of using Flux
 
 * **Automated deployments**: Flux automates the deployment process, reducing manual errors and freeing up developers to focus on other tasks.
 * **Git-based workflow**: Flux leverages Git as a source of truth, which makes it easier to track and revert changes.
-* **Declarative configuration**: Flux uses [Kubernetes](https://dzone.com/articles/kubernetes-full-stack-example-with-kong-ingress-co) manifests to define the desired state of a cluster, making it easier to manage and track changes.
+* **Declarative configuration**: Flux uses [Kubernetes](https://dzone.com/articles/kubernetes-full-stack-example-with-kong-ingress-co/) manifests to define the desired state of a cluster, making it easier to manage and track changes.
docusaurus/docs/guides/serverless/aws-native/lambda-based-observability.md (+1 -1)

@@ -179,7 +179,7 @@ One way to publish custom metrics to AWS CloudWatch is by calling CloudWatch met
 
 To achieve this, you can generate the logs using the [EMF specification](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html) and send them to CloudWatch using the `PutLogEvents` API. To simplify the process, there are **two client libraries that support the creation of metrics in the EMF format**.