[Bug]: Error when installing korifi on kind cluster run on podman #4206

@premroshan

Description

What happened?

When installing Korifi on a kind cluster backed by Podman, I get the following error:

Found 3 pods, using pod/install-korifi-dbcwr

Creating 'cf-admin' user

runtime: lfstack.push invalid packing: node=0xffffbc13cec0 cnt=0x1 packed=0xffffbc13cec00001 -> node=0xffffffffbc13cec0
fatal error: lfstack.push

runtime stack:
runtime.throw({0x247babe?, 0x1bc960001?})
runtime/panic.go:1101 +0x48 fp=0xffffbc96eb10 sp=0xffffbc96eae0 pc=0x4780e8
runtime.(*lfstack).push(0x3dd47c0?, 0x6?)
runtime/lfstack.go:29 +0x125 fp=0xffffbc96eb50 sp=0xffffbc96eb10 pc=0x416b25
runtime.(*spanSetBlockAlloc).free(...)
runtime/mspanset.go:322
runtime.(*spanSet).reset(0x3deff00)
runtime/mspanset.go:264 +0x79 fp=0xffffbc96eb80 sp=0xffffbc96eb50 pc=0x43b459
runtime.finishsweep_m()
runtime/mgcsweep.go:256 +0x92 fp=0xffffbc96ebb8 sp=0xffffbc96eb80 pc=0x42c4b2
runtime.gcStart.func3()
runtime/mgc.go:734 +0xf fp=0xffffbc96ebc8 sp=0xffffbc96ebb8 pc=0x47352f
runtime.systemstack(0x4829bf)
runtime/asm_amd64.s:514 +0x4a fp=0xffffbc96ebd8 sp=0xffffbc96ebc8 pc=0x47e14a

goroutine 1 gp=0xc000002380 m=0 mp=0x3dd60c0 [running, locked to thread]:
runtime.systemstack_switch()
runtime/asm_amd64.s:479 +0x8 fp=0xc0002e9668 sp=0xc0002e9658 pc=0x47e0e8
runtime.gcStart({0xffffbc12e5c0?, 0x70056?, 0x2e97a8?})
runtime/mgc.go:733 +0x41c fp=0xc0002e9760 sp=0xc0002e9668 pc=0x42125c
runtime.mallocgcSmallScanHeader(0xffffbc12e5c0?, 0x218bc60, 0x88?)
runtime/malloc.go:1518 +0x2ff fp=0xc0002e97b8 sp=0xc0002e9760 pc=0x41945f
runtime.mallocgc(0xc80, 0x218bc60, 0x1)
runtime/malloc.go:1060 +0xa5 fp=0xc0002e97e8 sp=0xc0002e97b8 pc=0x475945
runtime.newarray(0xc0002e9838?, 0x400000000476253?)
runtime/malloc.go:1776 +0x45 fp=0xc0002e9810 sp=0xc0002e97e8 pc=0x475b05
internal/runtime/maps.newarray(0x20?, 0x22e41a0?)
runtime/malloc.go:1800 +0x13 fp=0xc0002e9830 sp=0xc0002e9810 pc=0x475bb3
internal/runtime/maps.newGroups(...)
internal/runtime/maps/group.go:310
internal/runtime/maps.(*table).reset(0xc000472920, 0x2132c60, 0x80)
internal/runtime/maps/table.go:104 +0x3d fp=0xc0002e9858 sp=0xc0002e9830 pc=0x408dfd
internal/runtime/maps.newTable(0x2132c60, 0x80, 0x0, 0x0)
internal/runtime/maps/table.go:95 +0x8f fp=0xc0002e9888 sp=0xc0002e9858 pc=0x408d4f
internal/runtime/maps.(*table).grow(0xc0001b2960, 0x2132c60, 0xc0002e9bd8, 0x7?)
internal/runtime/maps/table.go:1092 +0x37 fp=0xc0002e98f0 sp=0xc0002e9888 pc=0x40a8b7
internal/runtime/maps.(*table).rehash(0xc0000a08b0?, 0xc0000a0838?, 0xc0000a08b8?)
internal/runtime/maps/table.go:1027 +0x25 fp=0xc0002e9920 sp=0xc0002e98f0 pc=0x40a625
runtime.mapassign_faststr(0x2132c60, 0xc0002e9bd8, {0xc00036f7e8, 0x15})
internal/runtime/maps/runtime_faststr_swiss.go:382 +0x197 fp=0xc0002e99d8 sp=0xc0002e9920 pc=0x40c8b7
github.com/prometheus/client_golang/prometheus.(*Registry).Register(0xc0004670e0, {0x296dd10, 0xc000578000})
github.com/prometheus/[email protected]/prometheus/registry.go:327 +0x41e fp=0xc0002e9d00 sp=0xc0002e99d8 pc=0x8cde5e
github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0xc0004670e0, {0xc0003db7c0?, 0x475939?, 0x10?})
github.com/prometheus/[email protected]/prometheus/registry.go:405 +0x4e fp=0xc0002e9d38 sp=0xc0002e9d00 pc=0x8cea0e
k8s.io/component-base/metrics.(*kubeRegistry).RawMustRegister(0xc0001b6620, {0xc0003db7c0, 0x1, 0xc000118701?})
k8s.io/component-base/metrics/registry.go:260 +0x2a fp=0xc0002e9d78 sp=0xc0002e9d38 pc=0x93ac8a
k8s.io/component-base/metrics.KubeRegistry.RawMustRegister-fm({0xc0003db7c0?, 0x1?, 0x2112820?})
&lt;autogenerated&gt;:1 +0x36 fp=0xc0002e9da8 sp=0xc0002e9d78 pc=0x940e56
k8s.io/component-base/metrics/legacyregistry.init.0()
k8s.io/component-base/metrics/legacyregistry/registry.go:55 +0x16e fp=0xc0002e9e28 sp=0xc0002e9da8 pc=0x940bce
runtime.doInit1(0x3cccbe0)
runtime/proc.go:7410 +0xd8 fp=0xc0002e9f50 sp=0xc0002e9e28 pc=0x453218
runtime.doInit(...)
runtime/proc.go:7377
runtime.main()
runtime/proc.go:254 +0x345 fp=0xc0002e9fe0 sp=0xc0002e9f50 pc=0x444645
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0002e9fe8 sp=0xc0002e9fe0 pc=0x47ff81

goroutine 2 gp=0xc0000028c0 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000078fa8 sp=0xc000078f88 pc=0x47820e
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.forcegchelper()
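
The trace above comes from the installer job's pod logs. A minimal sketch of how such a log is captured, assuming the namespace (korifi-installer) and job name (install-korifi) that the release manifest typically uses; both names are assumptions, not confirmed by this report:

# Follow the installer job's logs. kubectl picks one of the job's pods,
# which produces the "Found 3 pods, using pod/install-korifi-dbcwr" line above.
# Namespace and job name below are assumptions for illustration.
kubectl logs -n korifi-installer -f job/install-korifi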

What you expected to happen

For Korifi to install successfully. The same procedure works in the Docker Desktop environment.

Acceptance Criteria

GIVEN a system with a kind cluster that uses Podman as its container runtime
WHEN I install Korifi by running kubectl apply -f https://github.com/cloudfoundry/korifi/releases/latest/download/install-korifi-kind.yaml
THEN I should get a working cf setup on my arm64 macOS system

How to reproduce it (as minimally and precisely as possible)

Install a kind cluster using Podman on an Apple silicon Mac.

Install Korifi using the procedure described at https://github.com/cloudfoundry/korifi/blob/main/INSTALL.kind.md
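
A minimal command sketch of those two steps, assuming kind's experimental Podman provider and the release manifest quoted above; the cluster name and config file name are illustrative, and INSTALL.kind.md supplies the actual cluster config:

# Point kind at Podman instead of Docker (experimental provider).
export KIND_EXPERIMENTAL_PROVIDER=podman

# Create the cluster; kind-config.yaml stands in for the config from INSTALL.kind.md.
kind create cluster --name korifi --config kind-config.yaml

# Apply the one-shot installer manifest from the latest Korifi release.
kubectl apply -f https://github.com/cloudfoundry/korifi/releases/latest/download/install-korifi-kind.yaml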

Anything else we need to know?

No response

Environment

Revision of codebase:
Kubernetes version (use kubectl version):
  Client Version: v1.34.1
  Kustomize Version: v5.7.1
  Server Version: v1.34.0
Go version: go1.25.3 darwin/arm64
kind version: kind v0.30.0 go1.25.1 darwin/arm64
Hardware/OS: Apple silicon Mac (M3 Pro, 18 GB memory), macOS Tahoe 26.0.1

Metadata

Assignees: No one assigned
Labels: bug (Something isn't working)
Type: No type
Project status: ✅ Done
Relationships: None yet
Development: No branches or pull requests