seaweedfs: add iamguarded subpkg #77121
Merged
Chainguard Internal / elastic-build
succeeded
Jan 3, 2026 in 5m 30s
APKs built successfully
Build ID: ee40ce1c-c4c2-46e6-8259-a7b3e90f317a
Details
builds
x86_64 Logs
qemu: generating SSH host key for VM
qemu: starting VM
qemu: waiting for SSH
conn read: read tcp 127.0.0.1:36180->127.0.0.1:38175: i/o timeout
qemu: meta-data=/dev/vda isize=512 agcount=8, agsize=1638400 blks
qemu: = sectsz=4096 attr=2, projid32bit=1
qemu: = crc=1 finobt=1, sparse=1, rmapbt=1
qemu: = reflink=1 bigtime=1 inobtcount=1 nrext64=1
qemu: = exchange=0 metadir=0
qemu: data = bsize=4096 blocks=13107200, imaxpct=25
qemu: = sunit=0 swidth=0 blks
qemu: naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0
qemu: log =internal log bsize=4096 blocks=16384, version=2
qemu: = sectsz=4096 sunit=1 blks, lazy-count=1
qemu: realtime =none extsz=4096 blocks=0, rtextents=0
qemu: = rgcount=0 rgsize=0 extents
qemu: = zoned=0 start=0 reserved=0
qemu: Discarding blocks...Done.
qemu: [INIT] Checking for init.d scripts...
qemu: [INIT] No /opt/melange/init.d directory (optional, skipping)
qemu: ssh-keygen: generating new host keys: RSA ECDSA
qemu: Server listening on 0.0.0.0 port 2223.
qemu: Server listening on 0.0.0.0 port 22.
conn read: read tcp 127.0.0.1:36194->127.0.0.1:38175: i/o timeout
qemu: VM started successfully, SSH server is up
qemu: Connection closed by 10.0.2.2 port 36208
qemu: verifying VM host key against pre-provisioned key
qemu: Accepted publickey for root from 10.0.2.2 port 36222 ssh2: ECDSA SHA256:H9y3HQovxZgXLwoxLRHEUX+sV6MuLSbmKdw0l9s0tIw
qemu: VM host key successfully verified against pre-provisioned key
qemu: Connection closed by 10.0.2.2 port 36222
qemu: Accepted publickey for root from 10.0.2.2 port 36230 ssh2: ECDSA SHA256:H9y3HQovxZgXLwoxLRHEUX+sV6MuLSbmKdw0l9s0tIw
qemu: Accepted publickey for root from 10.0.2.2 port 37246 ssh2: ECDSA SHA256:H9y3HQovxZgXLwoxLRHEUX+sV6MuLSbmKdw0l9s0tIw
qemu: Accepted publickey for root from 10.0.2.2 port 36238 ssh2: ECDSA SHA256:H9y3HQovxZgXLwoxLRHEUX+sV6MuLSbmKdw0l9s0tIw
qemu: running kernel version: 6.16.10-r2-qemu-generic #Chainguard SMP PREEMPT_DYNAMIC Fri Oct 3 22:31:32 UTC 2025
qemu: setting up local workspace
qemu: unmounting host workspace from guest
SeaweedFS: store billions of files and serve them fast!
admin start SeaweedFS web admin interface
filer.backup resume-able continuously replicate files from a SeaweedFS cluster to another location defined in replication.toml
filer.sync resumable continuous synchronization between two active-active or active-passive SeaweedFS clusters
mini start a complete SeaweedFS setup optimized for S3 beginners and small/dev use cases
sql advanced SQL query interface for SeaweedFS MQ topics with multiple execution modes
version print SeaweedFS version
sftp start an SFTP server that is backed by a SeaweedFS filer
qemu: sending shutdown signal
running test pipeline for subpackage seaweedfs-iamguarded-compat
melange devel with runner qemu is testing:
image configuration:
contents:
build repositories: []
runtime repositories: []
repositories: []
keyring: []
packages: [bash busybox coreutils curl findutils git gnutar grep jq seaweedfs seaweedfs-iamguarded-compat sed symlink-check wait-for-it]
accounts:
runas:
users:
- uid=1000(build) gid=1000
groups:
- gid=1000(build) members=[build]
installing wolfi-baselayout (20230201-r24)
installing ca-certificates-bundle (20251003-r0)
installing ncurses-terminfo-base (6.6_p20251230-r0)
installing libgcc (15.2.0-r6)
installing glibc-locale-posix (2.42-r4)
installing glibc (2.42-r4)
installing ld-linux (2.42-r4)
installing ncurses (6.6_p20251230-r0)
installing bash (5.3-r3)
installing libxcrypt (4.5.2-r0)
installing libcrypt1 (2.42-r4)
installing busybox (1.37.0-r50)
installing libacl1 (2.3.2-r54)
installing libattr1 (2.5.2-r54)
installing libpcre2-8-0 (10.47-r0)
installing libsepol (3.9-r1)
installing libselinux (3.9-r1)
installing libcrypto3 (3.6.0-r6)
installing coreutils (9.9-r1)
installing libunistring (1.4.1-r1)
installing libidn2 (2.3.8-r3)
installing libpsl (0.21.5-r6)
installing nghttp3 (1.14.0-r0)
installing zlib (1.3.1-r51)
installing libbrotlicommon1 (1.2.0-r1)
installing libbrotlidec1 (1.2.0-r1)
installing libnghttp2-14 (1.68.0-r0)
installing readline (8.3-r1)
installing sqlite-libs (3.51.1-r0)
installing heimdal-libs (7.8.0-r42)
installing gdbm (1.26-r1)
installing cyrus-sasl (2.1.28-r45)
installing libssl3 (3.6.0-r6)
installing libldap-2.6 (2.6.10-r7)
installing libverto (0.3.2-r6)
installing krb5-conf (1.0-r7)
installing libcom_err (1.47.3-r1)
installing keyutils-libs (1.6.3-r37)
installing krb5-libs (1.22.1-r1)
installing libcurl-openssl4 (8.17.0-r0)
installing curl (8.17.0-r0)
installing findutils (4.10.0-r5)
installing libexpat1 (2.7.3-r0)
installing git (2.52.0-r0)
installing gnutar-rmt (1.35-r7)
installing gnutar (1.35-r7)
installing grep (3.12-r4)
installing oniguruma (6.9.10-r1)
installing jq (1.8.1-r3)
installing fuse2 (2.9.9-r53)
installing seaweedfs (4.05-r1)
installing libproc-2-0 (4.0.5-r42)
installing procps (4.0.5-r42)
installing seaweedfs-iamguarded-compat (4.05-r1)
installing sed (4.9-r7)
installing symlink-check (0.0.35-r1)
installing wait-for-it (0.20200823-r7)
installing wolfi-keys (1-r12)
installing apk-tools (2.14.10-r9)
installing wolfi-base (1-r7)
qemu: generating ssh key pairs for ephemeral VM
qemu: generating SSH host key for VM
qemu: starting VM
qemu: waiting for SSH
conn read: read tcp 127.0.0.1:55100->127.0.0.1:42695: i/o timeout
qemu: meta-data=/dev/vda isize=512 agcount=8, agsize=1638400 blks
qemu: = sectsz=4096 attr=2, projid32bit=1
qemu: = crc=1 finobt=1, sparse=1, rmapbt=1
qemu: = reflink=1 bigtime=1 inobtcount=1 nrext64=1
qemu: = exchange=0 metadir=0
qemu: data = bsize=4096 blocks=13107200, imaxpct=25
qemu: = sunit=0 swidth=0 blks
qemu: naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0
qemu: log =internal log bsize=4096 blocks=16384, version=2
qemu: = sectsz=4096 sunit=1 blks, lazy-count=1
qemu: realtime =none extsz=4096 blocks=0, rtextents=0
qemu: = rgcount=0 rgsize=0 extents
qemu: = zoned=0 start=0 reserved=0
qemu: Discarding blocks...Done.
qemu: [INIT] Checking for init.d scripts...
qemu: [INIT] No /opt/melange/init.d directory (optional, skipping)
qemu: ssh-keygen: generating new host keys: RSA ECDSA
qemu: Server listening on 0.0.0.0 port 2223.
qemu: Server listening on 0.0.0.0 port 22.
conn read: read tcp 127.0.0.1:55112->127.0.0.1:42695: i/o timeout
qemu: VM started successfully, SSH server is up
qemu: Connection closed by 10.0.2.2 port 55114
qemu: verifying VM host key against pre-provisioned key
qemu: Accepted publickey for root from 10.0.2.2 port 55118 ssh2: ECDSA SHA256:5ThK+V9q0e8gaVMMDLEXbTfkqZmCXKX/aC0q4UTzanc
qemu: VM host key successfully verified against pre-provisioned key
qemu: Connection closed by 10.0.2.2 port 55118
qemu: Accepted publickey for root from 10.0.2.2 port 55130 ssh2: ECDSA SHA256:5ThK+V9q0e8gaVMMDLEXbTfkqZmCXKX/aC0q4UTzanc
qemu: Accepted publickey for root from 10.0.2.2 port 55374 ssh2: ECDSA SHA256:5ThK+V9q0e8gaVMMDLEXbTfkqZmCXKX/aC0q4UTzanc
qemu: Accepted publickey for root from 10.0.2.2 port 55138 ssh2: ECDSA SHA256:5ThK+V9q0e8gaVMMDLEXbTfkqZmCXKX/aC0q4UTzanc
qemu: running kernel version: 6.16.10-r2-qemu-generic #Chainguard SMP PREEMPT_DYNAMIC Fri Oct 3 22:31:32 UTC 2025
qemu: setting up local workspace
qemu: unmounting host workspace from guest
running step "iamguarded/test-compat"
running step "auth/github"
Using OctoSTS to get a token for chainguard-dev/iamguarded-tools as elastic-build
running step "git-checkout"
[git checkout] repo='https://github.com/chainguard-dev/iamguarded-tools' dest='./.iamguarded-tools' depth='unset' branch='main' tag='' expcommit='' recurse='false' sparse_paths=''
[git checkout] Warning: no expected-commit
[git checkout] execute: git config --global --add safe.directory /tmp/tmp.vxGP1FRvXk
[git checkout] execute: git config --global --add safe.directory /home/build/.iamguarded-tools
[git checkout] execute: git clone --quiet --origin=origin --config=user.name=Melange Build --config=user.email=melange-build@cgr.dev --config=advice.detachedHead=false --branch=main --depth=1 https://github.com/chainguard-dev/iamguarded-tools /tmp/tmp.vxGP1FRvXk
[git checkout] execute: cd /tmp/tmp.vxGP1FRvXk
[git checkout] tar -c . | tar -C "/home/build/.iamguarded-tools" -x
[git checkout] execute: cd /home/build/.iamguarded-tools
[git checkout] execute: git config --global --add safe.directory /home/build/.iamguarded-tools
[git checkout] tip of main is commit 27e211d854a3c4b2875eb6b7a49354ddf95c6189
[git checkout] Setting commit date to Jan 1, 1980 (SOURCE_DATE_EPOCH found)
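The commit-date pinning above relies on SOURCE_DATE_EPOCH. A minimal sketch of the idea (315532800 is the Unix epoch for Jan 1, 1980 UTC; the exact variable handling inside melange may differ):

```shell
# Reproducible builds pin timestamps via SOURCE_DATE_EPOCH.
# 315532800 is the Unix epoch for 1980-01-01T00:00:00Z.
SOURCE_DATE_EPOCH=315532800
export SOURCE_DATE_EPOCH

# Render it back to a calendar date to confirm (GNU date syntax):
date -u -d "@$SOURCE_DATE_EPOCH" +%Y-%m-%d
```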
INFO: test iamguarded words: passed (no unexpected issues)
running step "test/tw/symlink-check"
running step "check for broken/dangling symlinks"
PASS[symlink-check]: /opt/iamguarded/seaweedfs/bin/weed -> /usr/bin/weed
INFO[symlink-check]: Tested [1] symlinks with [symlink-check]. [1/1] passed.
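symlink-check itself is a packaged tool; the dangling-symlink test it performs can be sketched roughly like this (the function name and output format are illustrative, not the tool's own):

```shell
# Report whether a symlink's target actually resolves.
check_symlink() {
  link="$1"
  if [ -L "$link" ] && [ -e "$link" ]; then
    # -e follows the link, so reaching this branch means the target exists.
    echo "PASS: $link -> $(readlink "$link")"
  else
    echo "FAIL: $link"
    return 1
  fi
}
```

Run against /opt/iamguarded/seaweedfs/bin/weed, a check of this shape would produce a PASS line like the one above.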
running step "Test master server startup and API"
running step "start daemon on localhost"
daemon started as pid 951 with: /opt/iamguarded/seaweedfs/bin/weed master -ip=0.0.0.0 -port=9333 -mdir=/tmp/seaweedfs-test
looking for 3 lines in output within 45 seconds
found within 2 seconds: Start Seaweed Master
found within 20 seconds: No existing leader found!
found within 20 seconds: Initializing new cluster
running post from /tmp/tmp.MShrYpTidR/post
wait-for-it: waiting 30 seconds for localhost:9333
wait-for-it: localhost:9333 is available after 0 seconds
{"IsLeader":true,"Leader":"0.0.0.0:9333"}
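The final JSON line is the master's cluster-status reply. A crude probe for the leadership flag can be written with plain grep; against a live master the input would come from something like `curl -s http://localhost:9333/cluster/status` (endpoint name assumed from SeaweedFS's HTTP API, not confirmed by this log):

```shell
# Succeed iff the master's cluster-status JSON on stdin reports leadership.
is_leader() {
  grep -q '"IsLeader":true'
}
```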
aarch64 Logs
3} from [::1]:50396
> I0103 10:28:41.008248 master_grpc_server_volume.go:145 volume grow &{Option:{"replication":{},"ttl":{"Count":0,"Unit":0},"version":3} Count:0 Force:false Reason:http assign}
> I0103 10:28:41.008271 node.go:192 topo failed to pick 1 from 0 node candidates
> I0103 10:28:41.008276 volume_growth.go:136 create 7 volume, created 0: Not enough data nodes found!
> [... the same dirAssign / volume grow / "Not enough data nodes found!" cycle repeats every ~200 ms through 10:28:44.6; 18 further iterations, differing only in timestamps, elided ...]
> I0103 10:28:44.814328 common.go:81 response method:GET URL:/dir/assign with httpStatus:406 and JSON:{"error":"failed to find writable volumes for collection: replication:000 ttl: error: No writable volumes and no free volumes left for {\"replication\":{},\"ttl\":{\"Count\":0,\"Unit\":0},\"version\":3}"}
-- end output --
found 3 of expected 3 lines in output.
found 0 / 9 error strings in output.
twk: SIGTERM sent to pid 155. kill returned 0.
twk: pid 155 exited within 2 seconds after SIGTERM
pod 4d629b4dab29cadf3342ffcfa464646bf48116afd9c019edd3df3727a544de47 terminated
tests completed successfully
all tests passed
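The "found within N seconds" checks in the daemon tests above come from the harness polling the daemon's output for expected lines. A rough sketch of that pattern (function name and messages are illustrative, not the melange harness itself):

```shell
# Poll a log file until an expected line appears or a timeout expires.
wait_for_line() {
  logfile="$1"; pattern="$2"; timeout="${3:-45}"
  i=0
  while [ "$i" -lt "$timeout" ]; do
    if grep -q "$pattern" "$logfile"; then
      echo "found within ${i} seconds: $pattern"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "timeout waiting for: $pattern"
  return 1
}
```

A real harness would also track the "error strings" side, failing if any forbidden pattern ever shows up in the output.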
Indexes
https://apk.cgr.dev/wolfi-presubmit/a7e7aa0ed0fa448c1abee7b45d356f30777ef154
Packages
- ✅ seaweedfs (success | 1m58s | x86_64 logs | aarch64 logs)
Tests
- ✅ seaweedfs (success | 1m26s | x86_64 logs | aarch64 logs)
More Observability
Command
cg build log \
--build-id ee40ce1c-c4c2-46e6-8259-a7b3e90f317a \
--project prod-wolfi-os \
--cluster elastic-pre-a \
--namespace pre-wolfi \
--start 2026-01-03T10:23:31Z \
--end 2026-01-03T10:39:01Z