
Commit 82aa374

Add new blog posts: Go profiling guide, Kubernetes ingress for external apps, and Qwen 2.5 Coder models guide.
1 parent 83d5bfe commit 82aa374

3 files changed: +449 -0 lines changed

Lines changed: 166 additions & 0 deletions
@@ -0,0 +1,166 @@
---
title: "A Practical Guide to Profiling Go Applications with pprof"
date: 2025-04-18T00:00:00-05:00
draft: false
tags: ["Golang", "Performance", "pprof", "Profiling", "Optimization"]
categories:
- Go Development
- Performance Optimization
author: "Matthew Mattox - mmattox@support.tools"
description: "Learn how to effectively profile and optimize Go applications using pprof with practical examples and visualization techniques."
more_link: "yes"
url: "/golang-pprof-profiling-guide/"
---

Go's built-in tooling is one of the language's greatest strengths. While many developers appreciate `go fmt` for consistent code formatting and `go test` for testing, far fewer take advantage of Go's powerful profiling capabilities. This guide demonstrates how to use pprof to identify performance bottlenecks in your Go applications.

<!--more-->

# Go Performance Profiling with pprof: A Practical Guide

## What Makes pprof Valuable

Go's official profiling tool, pprof, provides exceptional insights into your application's performance characteristics with minimal configuration. It offers:

- CPU usage analysis
- Memory allocation profiling
- Blocking operation identification
- Visual representation of performance data
- Minimal runtime overhead

Let's walk through profiling a real application: [dockertags](https://github.com/goodwithtech/dockertags), a tool for listing available Docker image tags.

## Setting Up CPU Profiling in Your Application

Adding profiling to your Go application requires only a few lines of code and an import of `runtime/pprof`. Insert the following at the beginning of your `main()` function:

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	// Create the file that will hold the CPU profile data
	f, err := os.Create("cpu.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Start CPU profiling
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}

	// Ensure profiling stops (and the file is flushed) when the function exits
	defer pprof.StopCPUProfile()

	// Your existing application code continues here...
}
```

This snippet creates a file named `cpu.pprof` that will store the CPU profiling data while your application runs.

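In practice you may not want to pay even the small profiling overhead on every run. A common pattern, shown here as a sketch rather than something the dockertags example actually does, is to gate profiling behind a command-line flag:

```go
package main

import (
	"flag"
	"log"
	"os"
	"runtime/pprof"
)

var cpuprofile = flag.String("cpuprofile", "", "write a CPU profile to this file")

func main() {
	flag.Parse()

	// Only profile when -cpuprofile=<path> is supplied
	if *cpuprofile != "" {
		f, err := os.Create(*cpuprofile)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		if err := pprof.StartCPUProfile(f); err != nil {
			log.Fatal(err)
		}
		defer pprof.StopCPUProfile()
	}

	// Your existing application code continues here...
}
```

Run the binary with `-cpuprofile=cpu.pprof` when you want a profile and omit the flag otherwise.
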
## Building and Running Your Profiled Application

Once you've added the profiling code, build and run your application as usual:

```bash
# Build the application
$ go build -o profiled-app ./cmd/myapp

# Run the application with a normal workload
$ ./profiled-app [normal arguments]
```

After your application completes its work, you'll find a `cpu.pprof` file in your current directory. This file contains all the profiling data collected during execution.

## Analyzing Profile Data

There are two main approaches to analyzing the collected profile data: web-based visualization and command-line inspection.

### Web-Based Visualization (Recommended)

The web interface provides interactive flame graphs and visualization options that make performance bottlenecks immediately obvious:

```bash
$ go tool pprof -http=":8000" profiled-app ./cpu.pprof
Serving web UI on http://localhost:8000
```

This command starts a local web server on port 8000. Open your browser and navigate to `http://localhost:8000` to explore the profile data.

The web interface offers several visualization options:

1. **Flame Graph**: The most intuitive view for understanding call hierarchies and CPU consumption
2. **Graph**: Shows function relationships with proportional box sizes
3. **Top**: Lists functions by resource consumption
4. **Source**: Links profiling data to source code when available

To access the flame graph, select "Flame Graph" from the "VIEW" dropdown in the interface header. The wider the function's bar in the graph, the more CPU time it consumed.

### Command-Line Analysis

For quick analysis or when working remotely, the command-line interface provides powerful inspection tools:

```bash
$ go tool pprof profiled-app cpu.pprof
File: profiled-app
Type: cpu
Time: Apr 17, 2025 at 9:39pm (EST)
Duration: 3.12s, Total samples = 85ms (2.72%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof)
```

The most useful commands include:

- `top`: Displays functions consuming the most resources
- `tree`: Shows the call hierarchy with resource consumption
- `list [function]`: Shows line-by-line profiling data for a specific function
- `web`: Generates a visual graph and opens it in your browser
- `svg`: Outputs a visualization in SVG format

Example `top` command output:

```
(pprof) top
Showing nodes accounting for 85ms, 100% of 85ms total
Showing top 10 nodes out of 42
      flat  flat%   sum%        cum   cum%
      52ms 61.18% 61.18%       52ms 61.18%  runtime.cgocall
      23ms 27.06% 88.24%       23ms 27.06%  runtime.madvise
      10ms 11.76%   100%       10ms 11.76%  crypto/elliptic.p256Sqr
         0     0%   100%       10ms 11.76%  crypto/elliptic.(*p256Point).p256BaseMult
         0     0%   100%       10ms 11.76%  crypto/elliptic.GenerateKey
         0     0%   100%       52ms 61.18%  crypto/tls.(*Conn).Handshake
```

## Interpreting Profile Results

When analyzing your profile data, look for:

1. **Functions with high cumulative time**: These functions, including everything they call, consume significant resources.
2. **Functions with high flat time**: These functions directly consume significant resources, excluding the functions they call.
3. **Unexpected CPU hotspots**: Areas where CPU usage is disproportionate to the expected workload.

In our example application, we can see significant time spent in TLS handshakes and cryptographic operations, suggesting network security operations may be the bottleneck.

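To make the flat versus cumulative distinction concrete, here is a small contrived program (not part of dockertags, purely an illustration). In a CPU profile of it, `hashValues` shows high flat time because it does the work itself, while `processBatch` shows high cumulative but near-zero flat time because almost all of its cost comes from the function it calls:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashValues does the actual work, so it accumulates "flat" CPU time.
func hashValues(data []byte, rounds int) [32]byte {
	sum := sha256.Sum256(data)
	for i := 0; i < rounds; i++ {
		sum = sha256.Sum256(sum[:])
	}
	return sum
}

// processBatch mostly just calls hashValues, so its own "flat" time is tiny,
// but its "cumulative" time includes everything hashValues burns.
func processBatch(items [][]byte) {
	for _, item := range items {
		hashValues(item, 100000)
	}
}

func main() {
	items := [][]byte{[]byte("a"), []byte("b"), []byte("c")}
	processBatch(items)
	fmt.Println("done")
}
```
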
## Beyond CPU Profiling

While this guide focused on CPU profiling, pprof supports multiple profiling types (a combined setup sketch follows the list):

- **Memory profiling**: Add `defer pprof.WriteHeapProfile(f)` to capture memory allocation patterns at shutdown
- **Block profiling**: Use `runtime.SetBlockProfileRate()` to profile goroutine blocking
- **Mutex profiling**: Enable with `runtime.SetMutexProfileFraction()` to find lock contention

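Here is a minimal sketch, assuming you are happy to write the profiles at shutdown and that the output file names are arbitrary, showing how these three profile types can be enabled together:

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	// Enable block and mutex profiling early; both are off by default.
	runtime.SetBlockProfileRate(1)     // record every blocking event
	runtime.SetMutexProfileFraction(5) // sample roughly 1 in 5 contention events

	defer func() {
		// Write the heap profile at shutdown.
		heapFile, err := os.Create("heap.pprof")
		if err != nil {
			log.Fatal(err)
		}
		defer heapFile.Close()
		if err := pprof.WriteHeapProfile(heapFile); err != nil {
			log.Fatal(err)
		}

		// Write the accumulated block and mutex profiles as well.
		for _, name := range []string{"block", "mutex"} {
			f, err := os.Create(name + ".pprof")
			if err != nil {
				log.Fatal(err)
			}
			if err := pprof.Lookup(name).WriteTo(f, 0); err != nil {
				log.Fatal(err)
			}
			f.Close()
		}
	}()

	// Your existing application code continues here...
}
```

Each resulting file can be opened with `go tool pprof` exactly like the CPU profile above.
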
## Practical Optimization Tips

After identifying bottlenecks with pprof, consider these optimization strategies:

1. **Reduce allocations**: Minimize garbage collection pressure by reusing objects
2. **Parallelize CPU-bound operations**: Use goroutines for compute-intensive tasks
3. **Buffer I/O operations**: Batch network and disk operations to reduce syscall overhead
4. **Cache expensive computations**: Store the results of functions that are called repeatedly with the same inputs
5. **Use sync.Pool**: Reuse frequently allocated and reclaimed objects (see the sketch below)

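As an illustration of the last tip, here is a minimal `sync.Pool` sketch; the 64 KB buffer size and the `handleRequest` function are assumptions for the example, not code from dockertags:

```go
package main

import (
	"fmt"
	"sync"
)

// bufferPool hands out reusable scratch buffers instead of allocating one per request.
var bufferPool = sync.Pool{
	New: func() any {
		buf := make([]byte, 0, 64*1024)
		return &buf
	},
}

func handleRequest(payload []byte) int {
	// Get a buffer from the pool (or a fresh one if the pool is empty).
	bufPtr := bufferPool.Get().(*[]byte)
	buf := (*bufPtr)[:0]

	// ... do some work with the buffer ...
	buf = append(buf, payload...)
	n := len(buf)

	// Return the buffer so other goroutines can reuse it.
	*bufPtr = buf
	bufferPool.Put(bufPtr)
	return n
}

func main() {
	fmt.Println(handleRequest([]byte("hello")))
}
```

Objects held in the pool may be garbage collected at any time, so treat it strictly as a cache for scratch space, never as a place to keep state.
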
## Conclusion

Profiling is an essential practice for developing high-performance Go applications. With pprof's minimal setup requirements and powerful visualization capabilities, there's no reason not to integrate profiling into your development workflow.

By regularly profiling your Go code, you can make data-driven optimization decisions that target actual bottlenecks rather than perceived ones. This approach leads to more efficient applications and a better understanding of your code's runtime characteristics.

For more Go performance techniques, explore our other guides on benchmarking, concurrency patterns, and efficient data structures.

Lines changed: 184 additions & 0 deletions
@@ -0,0 +1,184 @@
---
title: "Kubernetes Ingress Hack: Managing Domain Names and TLS for External Applications"
date: 2025-05-01T00:00:00-05:00
draft: false
tags: ["Kubernetes", "Ingress", "TLS", "External Services", "DevOps", "Cert-Manager"]
categories:
- Kubernetes
- Security
author: "Matthew Mattox - mmattox@support.tools"
description: "Learn how to leverage Kubernetes Ingress controllers to provide domain names and automated TLS certificates for applications running outside your cluster."
more_link: "yes"
url: "/kubernetes-ingress-external-apps/"
---

Kubernetes excels at managing domains and TLS certificates for applications running inside the cluster, but what about the legacy applications, standalone Docker containers, or specialized hardware that live outside Kubernetes? This practical guide demonstrates a simple technique for extending Kubernetes' powerful Ingress capabilities to any external application.

<!--more-->

# Extending Kubernetes Ingress to Non-Cluster Applications

## The Challenge: Managing External Application Access

While Kubernetes provides robust solutions for traffic routing, TLS certificate management, and domain configuration for workloads running inside the cluster, many organizations maintain a mix of infrastructure:

- Legacy applications not suitable for containerization
- Standalone services running on VMs or EC2 instances
- Specialized hardware with custom requirements
- Third-party services requiring secure access

Without a centralized approach, teams face several challenges:

1. Manual certificate management and renewal
2. Inconsistent domain configuration across environments
3. Complex DNS management
4. Higher security risks from expired certificates or misconfigurations
5. Operational overhead from managing multiple ingress solutions

## The Solution: Kubernetes as a Smart Proxy

By leveraging Kubernetes' ability to define Services without selectors and to specify Endpoints manually, we can create a solution that:

- Uses Kubernetes Ingress for routing to external applications
- Leverages cert-manager for automatic TLS certificate provisioning
- Centralizes domain management in your existing Kubernetes tooling
- Requires zero modifications to the external applications

## Implementation in Three Simple Steps

### Step 1: Define a Service with Manual Endpoints

Unlike typical Kubernetes Services that automatically discover pods via selectors, we'll create a Service without selectors and manually define the Endpoints:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-app-service
  namespace: default
spec:
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-app-service
  namespace: default
subsets:
  - addresses:
      - ip: 10.240.0.129 # Your external application's IP address
    ports:
      - port: 8080 # The port your application listens on
```

This configuration tells Kubernetes where to find your external application and how to route traffic to it.

### Step 2: Set Up Automated TLS Certificate Management

Leverage cert-manager to automatically obtain and renew certificates from Let's Encrypt:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: external-app-cert
  namespace: default
spec:
  secretName: external-app-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - external-app.yourdomain.com
```

This assumes you've already [installed cert-manager](https://cert-manager.io/docs/installation/) and configured a ClusterIssuer for Let's Encrypt.

### Step 3: Configure the Ingress Resource

Finally, create an Ingress resource to route traffic from your domain to the external service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-app-ingress
  namespace: default
spec:
  ingressClassName: nginx # Use your Ingress controller's class
  tls:
    - hosts:
        - external-app.yourdomain.com
      secretName: external-app-tls
  rules:
    - host: external-app.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: external-app-service
                port:
                  number: 80
```

## How It Works: The Technical Details

This solution creates a seamless bridge between Kubernetes and your external applications:

1. **Traffic Flow**: External requests hit your Kubernetes Ingress controller
2. **TLS Termination**: The Ingress controller handles TLS termination using the certificate from cert-manager
3. **Routing**: Traffic is forwarded to the Service, which has no pod selectors
4. **Service to Endpoint**: Since there are no matching pods, Kubernetes uses the manually defined Endpoints
5. **Final Delivery**: Traffic reaches your external application with the correct host headers and paths

## Advantages of This Approach

Centralizing your ingress through Kubernetes provides several key benefits:

1. **Unified Certificate Management**: All TLS certificates managed in one place
2. **Automatic Renewal**: No more expired-certificate emergencies
3. **Consistent Configuration**: Same Ingress patterns for all applications
4. **Easy Migration Path**: Simplifies eventual migration to containerized workloads
5. **Existing Tooling**: Leverage familiar Kubernetes tools and monitoring
6. **Security Standardization**: Consistent TLS configurations across all services

## Practical Use Cases

This technique proves valuable in numerous real-world scenarios:

### Legacy Application Integration

Connect mission-critical legacy applications to your modern infrastructure without modification.

### Hybrid Cloud Management

Maintain consistent access patterns across on-premises and cloud resources.

### Hardware Appliance Access

Provide secure access to specialized hardware devices through your standard Kubernetes gateway.

### Development Environments

Create consistent access to development tools that may run outside the cluster.

### Third-Party Service Integration

Provide a unified interface for accessing both internal and external services.

## Implementation Considerations

While this approach is powerful, keep these factors in mind:

- **Network Connectivity**: Ensure your Kubernetes nodes can reach the external application's IP
- **Health Checks**: Kubernetes will not probe manually defined Endpoints for you, so consider implementing custom health checks for the external service (see the sketch below)
- **Security**: After TLS termination at the Ingress controller, traffic to the external application is unencrypted unless you configure backend TLS
- **IP Changes**: If your external application's IP changes, you'll need to update the Endpoints resource

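To illustrate the health-check point, here is a hedged sketch of a tiny external checker written in Go. The `/healthz` path, the probe interval, and the hard-coded target address are assumptions for the example, not part of the manifests above; in practice you might run something like this as a sidecar or scheduled job and wire it into your alerting:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	// Address of the external application from the Endpoints resource above,
	// plus an assumed health endpoint exposed by the application.
	target := "http://10.240.0.129:8080/healthz"

	client := &http.Client{Timeout: 5 * time.Second}

	for {
		resp, err := client.Get(target)
		if err != nil {
			log.Printf("external app unreachable: %v", err)
		} else {
			if resp.StatusCode != http.StatusOK {
				log.Printf("external app unhealthy: status %d", resp.StatusCode)
			}
			resp.Body.Close()
		}
		time.Sleep(30 * time.Second)
	}
}
```
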
## Conclusion

This Kubernetes Ingress hack for external applications demonstrates the flexibility of Kubernetes as a platform. By treating Kubernetes as a smart proxy with automated certificate management, you can significantly simplify your overall architecture while maintaining robust security practices.

Whether you're managing a hybrid infrastructure, transitioning to containers, or simply need to maintain legacy systems alongside modern applications, this approach provides a practical solution that leverages your existing Kubernetes investment.

For complex environments, consider implementing this pattern with Helm charts or GitOps workflows to ensure consistent configuration and easy management across multiple external applications and environments.
