Thanks for the feedback from your testing! To be fair, quality gates are not a priority for me right now: setting them up requires a lot of work, it will take considerable time, and the benefit is just one additional step in the build pipeline. You enjoy developing new features, but I personally have 40 open pull requests and ~450 open issues at the moment, and nobody is helping me. Finally, I would prefer to restore the old indicators first and get them working; after that we can plan a migration or integration of extra analyzers. Sounds like a good plan?
Expected Behavior / New Feature
It would be great to get feedback from a static analysis tool (e.g. SonarQube) when creating new PRs. With predefined quality gates (80% coverage on new code, no bugs, security hotspots reviewed), this would make life easier for reviewers: they wouldn't need to review any code until the quality gate passes.
Actual Behavior / Motivation for New Feature
Some PRs are pushed without unit and acceptance tests, which means a lot of work for code reviewers. Code quality is not verified either, and standards are not followed.
Specifications
I have written a GitHub Action for SonarQube, a fantastic tool made in Switzerland, and it could be reused here... 😸
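The thread doesn't include the workflow itself, but a minimal sketch of what such a setup could look like is below. Note the action reference, branch name, project key, and secret names are assumptions for illustration, not the actual workflow mentioned above:

```yaml
# Hypothetical sketch of a SonarCloud PR-analysis workflow.
# The exact action version, branches, and secrets are assumptions.
name: SonarCloud analysis
on:
  pull_request:
  push:
    branches: [ main ]
jobs:
  sonar:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history helps SonarCloud compute "new code" metrics
      - name: Run SonarCloud scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # lets Sonar decorate the PR
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}    # SonarCloud project token
```

With a quality gate configured on the SonarCloud side (e.g. 80% coverage on new code), the check would fail on the PR until the gate is met.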
Coverage and code quality can be improved: https://sonarcloud.io/summary/overall?id=ggnaegi_Ocelot. We are not far from 80%, especially since some files shouldn't be part of the analysis.
So, please @TomPallister, @raman-m, @RaynaldM, let's use sonarqube 😄