Event loop busy-spins at 100% CPU when spawning concurrent bun processes using Ink + fetch (macOS ARM64)
What version of Bun is used?
1.3.10
What platform is your computer?
macOS 26.3 (25D125), Apple M4 Pro, 48 GB RAM, ARM64
What steps can reproduce the bug?
Spawn 20 concurrent bun child processes that each use Ink (React terminal renderer) and make a streaming HTTPS fetch. ~5-10% of processes permanently hang at 100% CPU.
Setup:

```sh
mkdir repro && cd repro
bun add ink react
```

child-ink-minimal.js:
```js
import React, { useState, useEffect } from "react";
import { render, Text, Box } from "ink";

let unmountApp;

function App() {
  const [status, setStatus] = useState("starting");
  useEffect(() => {
    async function run() {
      setStatus("fetching");
      const res = await fetch("https://httpbin.org/stream/50");
      const reader = res.body.getReader();
      let total = 0;
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        total += value.length;
      }
      setStatus("done");
      process.stderr.write(`done: ${total} bytes\n`);
      unmountApp();
      process.exit(0);
    }
    run().catch((err) => {
      process.stderr.write(`error: ${err.message}\n`);
      unmountApp();
      process.exit(1);
    });
  }, []);
  return React.createElement(
    Box,
    null,
    React.createElement(Text, null, `status=${status}`)
  );
}

const { unmount } = render(React.createElement(App));
unmountApp = unmount;
```

spawn-minimal.js:
```js
import { spawn } from "child_process";

const NUM_CHILDREN = 20;
const RUNS = 5;
let currentRun = 0;
let totalStuck = 0;

function runBatch() {
  currentRun++;
  let completed = 0;
  const startTime = Date.now();
  const children = [];
  console.log(`\n=== Run ${currentRun}/${RUNS} ===`);
  for (let i = 0; i < NUM_CHILDREN; i++) {
    const child = spawn("bun", ["child-ink-minimal.js"], {
      stdio: ["pipe", "pipe", "pipe"],
      cwd: import.meta.dir,
    });
    children.push({ i, child, done: false, pid: child.pid });
    child.stdin.end();
    child.stderr.on("data", (d) => {
      const msg = d.toString().trim();
      if (msg.startsWith("done:")) process.stdout.write(` [${i}] ${msg}\n`);
    });
    child.on("exit", (code) => {
      completed++;
      children.find((c) => c.i === i).done = true;
      if (completed === NUM_CHILDREN) {
        console.log(` ${NUM_CHILDREN} completed in ${Date.now() - startTime}ms`);
        if (currentRun < RUNS) runBatch();
        else {
          console.log(`\nTotal stuck: ${totalStuck} out of ${RUNS * NUM_CHILDREN}`);
          process.exit(totalStuck > 0 ? 1 : 0);
        }
      }
    });
  }
  setTimeout(() => {
    const stuck = children.filter((c) => !c.done);
    if (stuck.length > 0) {
      totalStuck += stuck.length;
      console.log(` STUCK: ${stuck.length} children (${stuck.map((s) => `pid=${s.pid}`).join(", ")})`);
      for (const s of stuck) s.child.kill("SIGKILL");
    }
  }, 15000);
}

runBatch();
```

Run:
```sh
bun spawn-minimal.js
```

What is the expected behavior?
All 100 processes (20 children x 5 runs) should complete their fetch requests, render via Ink, and exit.
What do you see instead?
5-10% of child processes permanently hang at 100% CPU. They never produce output and never exit. SIGTERM is ignored; only SIGKILL works.
Output:

```
=== Run 1/5 ===
 [7] done: 13890 bytes
 ... (18 children complete normally)
 20 completed in 2353ms

=== Run 3/5 ===
 ... (19 children complete normally)
 STUCK: 1 children (pid=27906)
 20 completed in 15016ms

=== Run 4/5 ===
 ... (18 children complete normally)
 STUCK: 2 children (pid=28197, pid=28198)
 20 completed in 15018ms

Total stuck: 5 out of 100
```
Key observation: Ink is required to trigger the bug
Without Ink (fetch() alone, or fetch() plus a local TCP server and memory allocation), 0 out of 1000+ processes hang across 50 runs. Adding Ink's render() triggers the bug at a ~5-10% rate.
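For reference, the Ink-free control child was essentially the same fetch + `reader.read()` loop with no `render()` call. A minimal sketch of that control case (the local streaming server and the 50-chunk payload are illustrative stand-ins for `https://httpbin.org/stream/50`, so the run is self-contained):

```js
// Hypothetical Ink-free control child: identical streaming-read loop,
// but no Ink render(). In the author's tests this variant never hangs.
import { createServer } from "node:http";
import { once } from "node:events";

// Local stand-in for https://httpbin.org/stream/50: 50 JSON lines.
const server = createServer((req, res) => {
  res.writeHead(200, {
    "content-type": "application/json",
    connection: "close", // avoid keep-alive holding the process open
  });
  for (let i = 0; i < 50; i++) res.write(JSON.stringify({ id: i }) + "\n");
  res.end();
});
server.listen(0);
await once(server, "listening");

// Same consumption pattern as child-ink-minimal.js.
const { port } = server.address();
const res = await fetch(`http://127.0.0.1:${port}/`);
const reader = res.body.getReader();
let total = 0;
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  total += value.length;
}
server.close();
process.stderr.write(`done: ${total} bytes\n`);
```

Running 20+ concurrent copies of this variant instead of the Ink child is a quick way to confirm that the hang needs Ink in the picture.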
Stack sampling (sample <pid>)
Stuck processes show the main thread spinning in kevent64 with zero-timeout polling:
```
835 Thread main
...
835 ??? (in bun.exe) + 0xf0f004
821 ??? (in bun.exe) + 0xf2f944
806 ??? (in bun.exe) + 0xf52b40
760 ??? (in bun.exe) + 0x122b18c
754 kevent64 (in libsystem_kernel.dylib) + 8,4
6 ??? + 0x122c248
5 DYLD-STUB$$os_unfair_lock_lock + 4
6 ??? + 0x122c258
4 DYLD-STUB$$os_unfair_lock_unlock + 4
```
Top of stack summary:
```
__ulock_wait2  11690
kevent64        1589
kevent           835
```
lsof on stuck processes
Some stuck processes have established outbound HTTPS connections (data is waiting to be read but never processed):
```
bun.exe FD 10 TCP 192.168.1.26:61543->216.150.1.193:443 (ESTABLISHED)
bun.exe FD 12 TCP 192.168.1.26:61545->216.150.1.193:443 (ESTABLISHED)
```
Others are stuck with only a listen socket (no outbound connections made yet):
```
bun.exe FD 83 TCP *:54627 (LISTEN)
```
Additional context
- 100% CPU: Stuck processes consume a full CPU core indefinitely (observed for 5+ hours).
- SIGTERM ignored: Stuck processes don't respond to SIGTERM; require SIGKILL.
- Race condition: Only 5-10% of children hang — timing/concurrency dependent.
- Node.js works fine: The same application logic under Node.js has zero hangs.
- Ink is the trigger: Without Ink, the bug does not reproduce even with 1000+ concurrent processes using fetch, Anthropic SDK streaming, local TCP servers, and 50MB memory allocation per process.
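Since the stuck children ignore SIGTERM, the unconditional `s.child.kill("SIGKILL")` in the watchdog is the only reliable cleanup. A hedged workaround sketch for parents that want to be gentler with healthy-but-slow children (`killWithEscalation` is an illustrative helper, not part of the repro scripts above):

```js
import { once } from "node:events";

// Illustrative parent-side workaround, not a fix for the underlying bug:
// try a graceful SIGTERM first, then escalate to SIGKILL after a grace
// period. For the stuck processes reported here, the SIGTERM step will
// always time out and the SIGKILL fallback does the actual work.
async function killWithEscalation(child, graceMs = 2000) {
  if (child.exitCode !== null || child.signalCode !== null) return; // already gone
  child.kill("SIGTERM");
  const timer = setTimeout(() => child.kill("SIGKILL"), graceMs);
  await once(child, "exit");
  clearTimeout(timer); // child exited gracefully; cancel the escalation
}
```

In spawn-minimal.js this would replace the unconditional SIGKILL in the 15-second watchdog, at the cost of up to `graceMs` extra wait per stuck child.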
Possibly related issues
- #26811 — "Bun using 20x the CPU of Node for network requests (since 1.2.8)": kevent64 filter comparison bug causing excessive CPU on macOS; fixed in 1.3.8 by #26812 ("fix(kqueue): fix incorrect filter comparison causing excessive CPU on macOS"), but this issue still reproduces on 1.3.10.
- #27490 — "bmalloc SYSCALL macro spins at 100% CPU on madvise EAGAIN — zero-delay retry loop": bmalloc SYSCALL spins at 100% CPU on madvise under memory pressure.
- #24231 — "Bun server stays at 100% CPU": multiple bun processes cause 100% CPU spikes.