Commit 0c312bb

Merge branch 'post-10'
2 parents a0b25b0 + 4e896c0 commit 0c312bb

File tree

7 files changed: +336 −0 lines changed
Lines changed: 330 additions & 0 deletions
@@ -0,0 +1,330 @@
<script lang="ts">
	import { posts, type post } from '../posts';
	import PageSubtitle from '../../../../components/pageSubtitle.svelte';
	import PageLayout from '../../../../components/layout/pageLayout.svelte';
	import PageHeader from '../../../../components/pageHeader.svelte';
	import PageParagraph from '../../../../components/pageParagraph.svelte';
	import { base } from '$app/paths';
	import Code from '../../../../components/code.svelte';

	let p: post = posts[9]; // zero-based index, so this is post 10
	let title = p.title;
	let date = p.date;
	let backText = 'blog';
	let backHref = '/blog';

	let debugMux = `
import (
	"fmt"
	"net/http"
	"net/http/pprof"
	"time"

	"github.com/arl/statsviz"
)

func Mux() *http.ServeMux {
	mux := http.NewServeMux()

	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)

	statsviz.Register(mux)

	// Liveness handler
	mux.HandleFunc("/liveness", func(w http.ResponseWriter, r *http.Request) {
		// simulate IO work
		time.Sleep(time.Millisecond * 10)
		fmt.Fprintln(w, "OK")
	})

	return mux
}
`;

	let serve = `
func main() {
	if err := http.ListenAndServe("0.0.0.0:3010", Mux()); err != nil {
		fmt.Println("Server Error")
	}
}
`;

	let burst = `hey -n 10000 -c 100 http://localhost:3010/liveness`;
</script>

<PageLayout {backHref} {backText} {title} {date}>
	<PageSubtitle className="underline underline-offset-8 decoration-sky-500 capitalize"
		>How to Use Go’s Built-In Profiling Tools to Monitor and Debug Your Services</PageSubtitle
	>

	<img src="{base}/blog/10/statsviz.png" alt="Statsviz dashboard" />

	<PageParagraph>
		Code on <a href="https://github.com/gradientsearch/go-service-profiling" class="underline underline-offset-8 text-sky-500">GitHub</a>
	</PageParagraph>
	<PageParagraph>
		In this post, we’ll walk through setting up debug endpoints to profile your Go application in
		real time. We'll also integrate <a href="https://github.com/arl/statsviz" class="text-sky-500">Statsviz</a>
		to provide a user-friendly GUI for visualizing metrics like heap usage and goroutine activity.
		To make things more interesting, we’ll simulate load by sending a burst of requests to a liveness
		endpoint, allowing us to see how the heap and goroutine count respond under pressure.
	</PageParagraph>

	<PageParagraph>
		The first step is to register the pprof debug endpoints with your Go HTTP server. These handlers
		expose runtime profiling data such as heap usage, goroutines, CPU profiles, and more. Go
		provides this functionality through the <code>net/http/pprof</code> package, which can be easily
		integrated into your server. We’ll also hook in the Statsviz package to enhance this setup with
		a real-time, browser-based dashboard for visualizing metrics.
	</PageParagraph>

	<Code code={debugMux} lang="go"></Code>
	<PageParagraph>
		With your handlers configured, the next step is simply to serve them using Go’s built-in HTTP
		server:
	</PageParagraph>

	<Code code={serve} lang="go"></Code>

	<PageParagraph>
		Next, we'll simulate a burst of traffic using the following command. This sends 10,000 requests
		to the liveness endpoint, 100 of them concurrently.
	</PageParagraph>

	<Code code={burst} lang="bash"></Code>

	<PageParagraph>Graph before the burst:</PageParagraph>

	<img src="{base}/blog/10/heap-graphs.png" alt="heap graphs" />

	<PageParagraph>Graphs after the burst:</PageParagraph>

	<img src="{base}/blog/10/heap-details.png" alt="heap details" />
	<img src="{base}/blog/10/heap-global.png" alt="heap global" />
	<img src="{base}/blog/10/goroutines.png" alt="goroutines" />

	<PageParagraph>
		From the graphs, you can see that the burst of requests put pressure on the heap, increasing the
		heap goal from 4MB to over 6MB. There's also a noticeable spike in goroutines—from 6 to just
		over 100—which makes sense given that our command triggered 100 concurrent requests.
	</PageParagraph>

	<PageParagraph>
		Using profiling tools like pprof and Statsviz helps us visualize and understand how our
		application behaves under load. In general, a consistent and stable heap graph is desirable, as
		it indicates efficient memory usage and garbage collection. Spikes, especially under heavy
		traffic, are expected—but persistent growth or erratic behavior could signal memory leaks,
		inefficient object lifetimes, or other performance issues worth investigating.
	</PageParagraph>

	<h2>⚠️ Warning</h2>

	<PageParagraph>
		<strong>Do not expose the debug endpoints to the public or production environments.</strong> The
		profiling endpoints provided by the <code>net/http/pprof</code> package expose sensitive runtime
		information that could potentially be used for malicious purposes, including heap dumps, goroutine
		stacks, and other performance data. These endpoints should be restricted to trusted internal users
		or disabled in production.
	</PageParagraph>

	<PageParagraph>To avoid exposing these endpoints, you can:</PageParagraph>
	<PageParagraph>
		<ul>
			<li>Bind the server to <code>localhost</code> or an internal network interface.</li>
			<li>
				Use environment variables or build tags to conditionally include profiling in development
				environments only.
			</li>
			<li>
				Protect the endpoints with authentication or access control mechanisms (e.g., basic auth or
				IP whitelisting).
			</li>
		</ul>
	</PageParagraph>
	<PageParagraph>
		Be cautious when running this in a production environment and always ensure these endpoints are
		not publicly accessible.
	</PageParagraph>

	<h2>Metric Descriptions</h2>

	<PageParagraph>
		Below are descriptions of the values shown in the legends of the Statsviz graphs.
	</PageParagraph>

	<table border="1" cellpadding="8" cellspacing="0">
		<thead>
			<tr>
				<th>Metric</th>
				<th>Description</th>
			</tr>
		</thead>
		<tbody>
			<tr><td>Heap in use</td><td>Memory occupied by live objects and dead objects not yet freed by the GC: <code>/memory/classes/heap/objects + /memory/classes/heap/unused</code>.</td></tr>
			<tr><td>Heap free</td><td>Memory that could be returned to the OS but hasn't been: <code>/memory/classes/heap/free</code>.</td></tr>
			<tr><td>Heap released</td><td>Memory that has been returned to the OS: <code>/memory/classes/heap/released</code>.</td></tr>
			<tr><td>Heap sys</td><td>Estimate of all heap memory obtained from the OS: <code>/memory/classes/heap/objects + unused + released + free</code>.</td></tr>
			<tr><td>Heap objects</td><td>Memory used by live and not-yet-freed objects: <code>/memory/classes/heap/objects</code>.</td></tr>
			<tr><td>Heap stacks</td><td>Memory used for stack space: <code>/memory/classes/heap/stacks</code>.</td></tr>
			<tr><td>Heap goal</td><td>Heap size target for the end of the GC cycle: <code>/gc/heap/goal</code>.</td></tr>
			<tr><td>Live objects</td><td>Number of live or unswept objects occupying heap memory: <code>/gc/heap/objects</code>.</td></tr>
			<tr><td>Live bytes</td><td>Currently allocated bytes (not yet GC’ed): <code>/gc/heap/allocs - /gc/heap/frees</code>.</td></tr>
			<tr><td>Mspan in-use</td><td>Memory used by active <code>mspan</code> structures: <code>/memory/classes/metadata/mspan/inuse</code>.</td></tr>
			<tr><td>Mspan free</td><td>Reserved but unused memory for <code>mspan</code>: <code>/memory/classes/metadata/mspan/free</code>.</td></tr>
			<tr><td>Mcache in-use</td><td>Memory used by active <code>mcache</code> structures: <code>/memory/classes/metadata/mcache/inuse</code>.</td></tr>
			<tr><td>Mcache free</td><td>Reserved but unused memory for <code>mcache</code>: <code>/memory/classes/metadata/mcache/free</code>.</td></tr>
			<tr><td>Goroutines</td><td>Count of live goroutines: <code>/sched/goroutines</code>.</td></tr>
			<tr><td>Events per second</td><td>Total number of goroutine scheduling events, from <code>/sched/latencies:seconds</code>, multiplied by 8.</td></tr>
			<tr><td>Events per second per P</td><td>Events per second divided by <code>GOMAXPROCS</code>: <code>/sched/gomaxprocs:threads</code>.</td></tr>
			<tr><td>OS stacks</td><td>Stack memory allocated by the OS: <code>/memory/classes/os-stacks</code>.</td></tr>
			<tr><td>Other</td><td>Memory for trace buffers, debugging structures, finalizers, and more: <code>/memory/classes/other</code>.</td></tr>
			<tr><td>Profiling buckets</td><td>Memory used by the profiling stack trace hash map: <code>/memory/classes/profiling/buckets</code>.</td></tr>
			<tr><td>Total</td><td>Total memory mapped by the Go runtime: <code>/memory/classes/total</code>.</td></tr>
			<tr><td>Mark assist</td><td>CPU time goroutines spent helping the GC: <code>/cpu/classes/gc/mark/assist</code>.</td></tr>
			<tr><td>Mark dedicated</td><td>CPU time on dedicated processors for GC tasks: <code>/cpu/classes/gc/mark/dedicated</code>.</td></tr>
			<tr><td>Mark idle</td><td>CPU time for GC tasks using otherwise idle CPU: <code>/cpu/classes/gc/mark/idle</code>.</td></tr>
			<tr><td>Pause</td><td>Total CPU time the app was paused by the GC: <code>/cpu/classes/gc/pause</code>.</td></tr>
			<tr><td>GC total</td><td>Total CPU time spent doing GC: <code>/cpu/classes/gc/total</code>.</td></tr>
			<tr><td>Mutex wait</td><td>Time goroutines spent blocked on a <code>sync.Mutex</code> or <code>sync.RWMutex</code>: <code>/sync/mutex/wait/total</code>.</td></tr>
			<tr><td>Scannable globals</td><td>Amount of scannable global variable space: <code>/gc/scan/globals</code>.</td></tr>
			<tr><td>Scannable heap</td><td>Amount of scannable heap space: <code>/gc/scan/heap</code>.</td></tr>
			<tr><td>Scanned stack</td><td>Bytes of stack scanned during the last GC cycle: <code>/gc/scan/stack</code>.</td></tr>
		</tbody>
	</table>
</PageLayout>

src/routes/(main)/blog/posts.ts

Lines changed: 6 additions & 0 deletions
@@ -58,5 +58,11 @@ export const posts: post[] = [
		title: 'Completed Ultimate Go Language Guide Course',
		description: `After completing the course, I deepened my understanding of writing efficient, idiomatic Go code. I explored Go internals, learned to balance readability and performance, and focused on package-oriented design for better maintainability and long-term success.`,
		date: '2025-04-07'
	},
	{
		id: '10',
		title: 'Exposing Go Internals: Setting Up Debug Endpoints',
		description: `In this post, we’ll walk through setting up debug endpoints to profile your Go application in real time. We'll also integrate Statsviz to provide a user-friendly GUI for visualizing metrics like heap usage and goroutine activity. To make things more interesting, we’ll simulate load by sending a burst of requests to a liveness endpoint, allowing us to see how the heap and goroutine count respond under pressure.`,
		date: '2025-04-15'
	}
];

static/blog/10/goroutines.png (27 KB)

static/blog/10/heap-details.png (37.3 KB)

static/blog/10/heap-global.png (42 KB)

static/blog/10/heap-graphs.png (186 KB)

static/blog/10/statsviz.png (228 KB)

0 commit comments