ops-pod script doesn't provide correct environment when chrooted #218

@mimiteto

Description

What happened:
Starting a pod with:

╭─[cluster|default]user in ~   25-10-02 - 12:16:41
╰─○ bash ~/ops-toolbelt/hacks/ops-pod -c -o -i $KUBECTL_NODE_SHELL_IMAGE ip-10-180-12-49.eu-west-1.compute.internal
Node name provided ...
Deploying ops pod on ip-10-180-12-49.eu-west-1.compute.internal

pod/ops-pod-user created
Waiting for pod to be running...
Waiting for pod to be running...
Waiting for pod to be running...
Waiting for pod to be running...
Waiting for pod to be running...
Waiting for pod to be running...
Waiting for pod to be running...
Waiting for pod to be running...

BE CAREFUL!!! Node root directory mounted under /


root at ip-10-180-12-49.eu-west-1.compute.internal in /
$ pwru
bash: pwru: command not found

root at ip-10-180-12-49.eu-west-1.compute.internal in /
$ etcdct
bash: etcdct: command not found

root at ip-10-180-12-49.eu-west-1.compute.internal in /
$ etcdctl
bash: etcdctl: command not found

root at ip-10-180-12-49.eu-west-1.compute.internal in /
$
exit
command terminated with exit code 127
pod "ops-pod-user" deleted

What you expected to happen:
pwru and etcdctl should be available in the chrooted shell.

How to reproduce it (as minimally and precisely as possible):
Start an ops pod chrooted to the node root with host network, as described above.
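The `command not found` / exit code 127 behaviour above is consistent with the shell resolving commands against the node's filesystem after the chroot, so binaries installed only in the ops-pod image are no longer on the search path. A minimal sketch of that lookup failure, making no assumptions about the script itself (`pwru-demo` and the directories are stand-ins created just for the demonstration):

```shell
#!/bin/sh
# Sketch: a command found via one PATH "disappears" when lookup happens
# against a PATH that lacks its directory, failing with exit code 127
# exactly as in the session above.
tmp=$(mktemp -d)
mkdir -p "$tmp/bin"
printf '#!/bin/sh\necho ok\n' > "$tmp/bin/pwru-demo"
chmod +x "$tmp/bin/pwru-demo"

# Container-like view: the tool directory is on PATH, lookup succeeds.
PATH="$tmp/bin:$PATH" sh -c 'pwru-demo'            # prints "ok"

# Chroot-like view: only the node's standard directories are searched,
# so the image's tool is not found.
PATH="/usr/bin:/bin" sh -c 'pwru-demo' 2>/dev/null
rc=$?
echo "exit=$rc"
rm -rf "$tmp"
```

A fix along these lines would typically bind-mount or copy the image's tool directory into the mounted node root before chrooting, or extend PATH inside the chroot to point at the still-mounted container filesystem; which approach fits ops-pod depends on how the script sets up the mount.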

Anything else we need to know:

Environment:

    Labels

    kind/bug, lifecycle/stale
