Conversation

@ihor-hrytskiv (Contributor) commented Oct 14, 2024

This is a proposal to solve issue #381. If you approve this design, or if we choose a different one, I will write tests to cover the code.

Sidecar containers are a new type of container: they start among the init containers, run for the lifetime of the Pod, and don't block Pod termination. The kubelet makes a best effort to keep them alive and running while the other containers are running.
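
For context, a minimal sketch of the mechanism, assuming the k8s.io/api/core/v1 types (the function and container names are illustrative, not this PR's code):

```go
import corev1 "k8s.io/api/core/v1"

// sidecarContainer returns an init container that Kubernetes 1.28+ can treat
// as a native sidecar: restartPolicy: Always tells the kubelet to start it in
// init order but keep it running for the Pod's whole lifetime, without
// blocking Pod termination.
func sidecarContainer(image string) corev1.Container {
	always := corev1.ContainerRestartPolicyAlways
	return corev1.Container{
		Name:          "csql-proxy", // illustrative name, not the operator's real one
		Image:         image,
		RestartPolicy: &always,
	}
}
```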

  • Tests pass
  • Appropriate changes to README are included in PR

@ihor-hrytskiv requested a review from a team as a code owner on October 14, 2024 08:25
@jackwotherspoon (Collaborator)

Thanks @ihor-hrytskiv for the PR and contribution! 😄

@hessjcg is OOO this week, but he will review this first thing next week when he is back.

@hessjcg (Collaborator) commented Oct 28, 2024

Hi, nice work, @ihor-hrytskiv. Thank you for contributing.

I would like to suggest a bigger change: instead of creating a new option in the custom resource for the sidecar container, the operator should check the Kubernetes API version and always use InitContainer sidecars when they are supported.
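
For illustration, a hedged sketch of such a check using the client-go discovery client (not this PR's actual code; the function name and the exact version threshold are assumptions):

```go
import (
	"strconv"
	"strings"

	"k8s.io/client-go/discovery"
)

// supportsNativeSidecars asks the API server for its version and reports
// whether InitContainer sidecars can be used. The 1.29 threshold is this
// sketch's assumption: the SidecarContainers feature gate is on by default
// from Kubernetes 1.29 (it was alpha in 1.28).
func supportsNativeSidecars(dc discovery.DiscoveryInterface) (bool, error) {
	info, err := dc.ServerVersion()
	if err != nil {
		return false, err
	}
	major, err := strconv.Atoi(info.Major)
	if err != nil {
		return false, err
	}
	// Some providers report minors like "29+", so strip the suffix first.
	minor, err := strconv.Atoi(strings.TrimSuffix(info.Minor, "+"))
	if err != nil {
		return false, err
	}
	return major > 1 || (major == 1 && minor >= 29), nil
}
```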

I will review the code in more detail over the next few days. I think there may be an issue with the logic that maintains the list of ports in use. I'll add more review comments soon.

-Jonathan

@ihor-hrytskiv (Contributor, Author)

> Hi, nice work, @ihor-hrytskiv. Thank you for contributing.
>
> I would like to suggest a bigger change: instead of creating a new option in the custom resource for the sidecar container, the operator should check the Kubernetes API version and always use InitContainer sidecars when they are supported.
>
> I will review the code in more detail over the next few days. I think there may be an issue with the logic that maintains the list of ports in use. I'll add more review comments soon.
>
> -Jonathan

Hi @hessjcg, thanks! I'll try to add this functionality as soon as possible.

@ihor-hrytskiv force-pushed the feat/run-as-initcontainer branch from f78e9dd to 99e4680 on November 4, 2024 16:05
@ihor-hrytskiv (Contributor, Author)

Hi @hessjcg, please take a look.

@hessjcg added the "tests: run" label (Run all the tests for this PR) on Nov 21, 2024
@hessjcg (Collaborator) left a comment

Nice work. I pushed a commit that updates the integration tests so that they run on Kubernetes 1.28 and latest. I'm going to work on running the end-to-end tests now.

Makefile Outdated

KUSTOMIZE_VERSION=v4.5.2# don't manage with renovate, this repo has non-standard tags

ENVTEST_K8S_VERSION=1.28.x

Instead of hardcoding the envtest K8s version here, let's add a new make target, go_test_k8s_1_28, that runs the integration tests against K8s 1.28.x.
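
A minimal sketch of the suggested target (the go_test dependency it invokes is an assumption about this Makefile, not its actual contents):

```make
# Hypothetical sketch: run the integration tests pinned to K8s 1.28.x by
# overriding ENVTEST_K8S_VERSION for a single invocation.
.PHONY: go_test_k8s_1_28
go_test_k8s_1_28:
	$(MAKE) go_test ENVTEST_K8S_VERSION=1.28.x
```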

nonAuthProxyContainers = append(nonAuthProxyContainers, containers[i])
}
}
allContainers := append(podSpec.Containers, podSpec.InitContainers...)

Good cleanup. Thank you.

@jackwotherspoon (Collaborator) commented Nov 22, 2024

> Nice work. I pushed a commit that updates the integration tests so that they run on Kubernetes 1.28 and latest. I'm going to work on running the end-to-end tests now.

@hessjcg We no longer run integration tests on forks due to the security issues with the pull_request_target GitHub actions trigger we used to use.

So there are two options here:

  1. Merge the PR and closely monitor the integration tests on the continuous build on main; revert or send a follow-up PR to fix the tests if the build fails.
  2. Copy the forked branch to an internal branch of your own so the tests run on it, then merge this PR and delete the copied branch.

@hessjcg (Collaborator) commented Nov 22, 2024

@jackwotherspoon, good point. I'm going to run the e2e tests in my dev environment, and if they pass, I will merge and monitor the e2e tests on main. I would like to build a way to run e2e tests on PRs again, but that will have to wait.

@hessjcg (Collaborator) commented Nov 22, 2024

E2E tests passed: E2E Test GitHub Action. I'm going to approve and merge this PR.

@hessjcg merged commit 19d8043 into GoogleCloudPlatform:main on Nov 22, 2024
12 checks passed
@utmaks commented Nov 26, 2024

After upgrading to version 1.6.0, I got a lot of runtime validation errors on startup (Startup probe failed: Get "http://10.210.0.89:15020/app-health/csql-apps-apps/startupz": dial tcp 10.210.0.89:15020: connect: connection refused), and I can't get back the previous behavior (I need version 1.6.0 because of the minSigtermDelay parameter). The service account permissions were not changed in any way, and pods from the old versions still reach the database. Is everything really working as intended here?

I'm using Istio, and that may be the cause, since the Deployment gets modified several times. In particular, the csql container is now injected as the first init container and starts before istio-init, and the new approach doesn't work with that ordering right away.

Update:

My hypothesis turned out to be correct: after disabling Istio, the proxy container came up properly. I think we should let the user choose the mode of operation themselves, as in the first commit (sidecarType)...

@jackwotherspoon (Collaborator)

@utmaks Thanks for raising this! 😄

Do you mind creating a new issue on this repo so we can get this addressed and fixed?

Thanks in advance.

@utmaks commented Nov 26, 2024

#641
