WIP: Add support for AWS CI runners #12
Conversation
(force-pushed from 217fe1f to 686c66c)
@lool - this is probably going to be close to what I'll try to get merged. It would be good to get your initial feedback. Ignore how long the workflow takes; it's a known network configuration issue the IT team is working on fixing.
@doanac IIUC, in theory this should be pretty much the same as before, except running in AWS instead of GCP. The yaml looks fine; I see the arm64 runner is named something with arm64 in it, which is good. I wonder how the amd64 ones will be named? It would be nice if they had amd64 in the names. The path to the fileserver has changed, but it's pretty much transparent here. The only difficulty I have is in accessing the build log. NB: I currently don't have a Qualcomm AWS account. My dream would be:
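For illustration only (the label names below are guesses, not the ones actually used in this PR), encoding the architecture in the runner labels for both jobs might look something like:

```yaml
jobs:
  build-arm64:
    # Hypothetical labels; the real AWS runner labels may differ.
    runs-on: [self-hosted, aws, arm64]
    steps:
      - uses: actions/checkout@v4
  build-amd64:
    # Naming the amd64 runner symmetrically makes the mapping obvious
    # when scanning the workflow or the runner list.
    runs-on: [self-hosted, aws, amd64]
    steps:
      - uses: actions/checkout@v4
```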
All the build logs are viewable except for the "AWS CodeBuild" log. That's a known issue they are trying to address, but its content has nothing to do with our build and can be ignored.
Ah, never mind, indeed, the GitHub log is just there :) Comparing the two, I guess we'll have the same expected GitHub features and UX. Looking at build performance, the AWS runners are 30% slower in the largest step (building the image), but the artifacts upload phase is 6x faster. Overall the build was 15% slower, which is acceptable. What's the plan: do you want us to run these builds next to the GitHub runner ones, or should we switch the default to the AWS ones when we're ready? I'm landing a few changes related to the fileserver in the RB1 pull request which I would really like to land, but otherwise I'm happy to start using AWS arm64 runners.
Forgot to ask: do we have access to all instance types? I'm curious if there's one with nested virt that would perform better than QEMU for this particular workflow.
I don't know enough about AWS machinery to know if it's possible to do nested virt; I'm doubtful. On performance: most of the slowness is network I/O. They are routing traffic in an inefficient way but know what to change to make that better; I think they'll end up almost the same. On file upload: it happens during a "magic" step that's asynchronous to this, so there's no real performance number you'll get on this right now. I was thinking you could run them side by side for a bit to get confidence in it. You'll also need them side by side because LAVA isn't yet ready to use their output. However, it does duplicate code during the interim.
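One quick way to answer the nested-virt question empirically would be to check for `/dev/kvm` on the runner itself; a small sketch (the helper name is mine, not part of this PR):

```shell
#!/bin/sh
# Check whether hardware virtualization (KVM) is usable on this machine.
# If /dev/kvm exists and is writable by the current user, QEMU can use
# KVM acceleration; otherwise it falls back to much slower software
# emulation (TCG).
kvm_available() {
  [ -w /dev/kvm ]
}

if kvm_available; then
  echo "KVM acceleration available"
else
  echo "no usable /dev/kvm: QEMU will use software emulation (TCG)"
fi
```

Running this as a throwaway workflow step on each candidate instance type would settle the question faster than digging through AWS docs.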
This is an exact copy of debos.yml so that we can have a clear view of what is being changed for AWS, while also making it easier to rebase onto future changes that might happen while this is being reviewed and tested.

Signed-off-by: Andy Doan <[email protected]>
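If the duplication becomes painful longer term, one option (a sketch only; the file name and input are hypothetical, not something proposed in this PR) is to factor the shared steps into a reusable workflow invoked via `workflow_call`, so the GCP and AWS variants differ only in the runner label they pass:

```yaml
# .github/workflows/debos-common.yml (hypothetical file name)
name: debos-common
on:
  workflow_call:
    inputs:
      runner:
        description: "Runner label to build on"
        required: true
        type: string
jobs:
  build:
    runs-on: ${{ inputs.runner }}
    steps:
      - uses: actions/checkout@v4
      # ...shared debos build steps go here...
```

A caller workflow would then select the runner per variant with `uses: ./.github/workflows/debos-common.yml` and `with: { runner: <label> }`, keeping one copy of the build logic.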
Thanks for refreshing! As you saw, workflows are seeing somewhat large updates; there's another effort with LAVA CI that will require shuffling things a bit, and then I think it will be quieter, at least on the image workflows, for a while. When I looked at the first proposal, the differences between the GCP-hosted runners and the AWS ones were very minor (IIRC, basically the tags to select runners and the path to the volumes passed to the container image), so it should be easy to forward-port this at that point. (Do let me know if you think we should merge this now to give it enough exposure, though.)
It's close, but I found out on Friday there's a new builder they are going to give us that should be a little better than this one. I'm waiting for the info on that, and then I'll let you know.