OLS-1847 - Adding test for quota limits #2558
Conversation
Force-pushed from 323b911 to 4360a61
seems correct (as smoke tests)! TY
tests/e2e/test_api.py
Outdated
# assert that the available quota is
# less than the initial one hardcoded in the olsconfig
assert (
    response.json()["available_quotas"]["UserQuotaLimiter"] < 11111
)
looks ok!
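For readers skimming the thread, here is a minimal sketch of how the quota check above could sit in an e2e test, assuming an authenticated HTTP `client` fixture and an illustrative quota-reporting endpoint; the actual fixture and endpoint names in the repo may differ:

```python
# Sketch only: `client` and the endpoint path are assumptions, not the repo's actual API.
INITIAL_QUOTA = 11111  # value hardcoded in the olsconfig used by this suite


def check_quota_decreased(client):
    # hypothetical endpoint that reports the remaining quota per limiter
    response = client.get("/authorized")
    assert response.status_code == 200
    available = response.json()["available_quotas"]["UserQuotaLimiter"]
    # after at least one query has been served, some quota must have been consumed
    assert available < INITIAL_QUOTA
```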
Force-pushed from 4360a61 to 8209978
/lgtm
tests/e2e/test_api.py
Outdated
_, provider = metrics_utils.get_enabled_model_and_provider(pytest.metrics_client)
cluster_utils.run_oc(
    [
        "apply",
        "-f",
        f"tests/config/operator_install/olsconfig.crd.{provider}_quota.yaml",
    ]
)
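A practical note on the snippet above: after `oc apply` of a new olsconfig, tests usually have to wait for the operator to roll the change out before asserting quota behaviour. Below is a hedged sketch of such a wait using `oc rollout status` directly; the deployment and namespace names are assumptions, and the repo's `cluster_utils` helpers may already cover this:

```python
import subprocess
import time


def wait_for_rollout(deployment="lightspeed-app-server",
                     namespace="openshift-lightspeed", timeout=300):
    """Poll until the deployment finishes rolling out the new config.

    The deployment and namespace names here are illustrative assumptions.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(
            ["oc", "rollout", "status", f"deployment/{deployment}",
             "-n", namespace, "--timeout=10s"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return
        time.sleep(5)
    raise TimeoutError(f"{deployment} did not roll out within {timeout}s")
```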
@JoaoFula a few questions.
- So far we have been adding a new test suite to test a different config. Why are we not doing the same here?
- Quota is not a provider-specific feature; do we really need to test it for all providers?
- Where can I find the log for this test run? How are the artifacts being handled?
- My way of thinking was that this would be quite a small change and wouldn't make sense to separate into another test suite, but after your comment I remembered that's what we're doing with introspection, so adding a new mark and a new test suite with one provider will probably be enough.
- The way I had it, we would need to, but if I change to a new test suite I can limit it to one provider since, like you said, it shouldn't vary based on the provider.
- Since there isn't a new test suite, the logs are added to the logs for the openai, azure_openai and watsonx test suites, like in https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/openshift_lightspeed-service/2558/pull-ci-openshift-lightspeed-service-main-4.18-e2e-ols-cluster/1935597370033049600/artifacts/e2e-ols-cluster/e2e/artifacts/junit_e2e_openai.xml at the end of the test_api tests.
- Likewise, the artifacts are bundled together with the rest of the artifacts for that test suite. However, it seems the pod log storing failed due to "sensitive information": https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/openshift_lightspeed-service/2558/pull-ci-openshift-lightspeed-service-main-4.18-e2e-ols-cluster/1935597370033049600/artifacts/e2e-ols-cluster/e2e/artifacts/openai/cluster/podlogs/lightspeed-app-server-6c4df6554-w588j-lightspeed-service-api.log
For the sake of consistency, adding a new test suite + marker will probably make more sense than the approach I followed here. After shift week, I'll revisit this PR. Thank you!
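For what the suite + marker approach might look like, here is a minimal sketch assuming a marker named `quota`; the actual marker and file names used in the repo may differ:

```python
import pytest


# conftest.py (sketch): register the marker so pytest does not warn about it
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "quota: e2e tests exercising user quota limits"
    )


# test_quota_limits.py (sketch): a test gated behind the new marker
@pytest.mark.quota
def test_available_quota_decreases_after_query():
    ...
```

CI could then run this suite as its own job (e.g. `pytest -m quota tests/e2e`) against a single provider.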
Force-pushed from fccee17 to c35042b
Force-pushed from c35042b to 4df940a
Adding test for quota limits
Changing test according to suggested changes
Removing try except that could obfuscate errors
Adding verification of skipped test to fail run
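Regarding the "verification of skipped test to fail run" commit: one common way to make unexpected skips fail the whole session is a small conftest hook, sketched below; this is only an illustration of the idea, not necessarily how the commit implements it:

```python
# conftest.py (sketch): treat unexpected skips as a failure of the whole run
import pytest

_skipped_tests = []


def pytest_runtest_logreport(report):
    # collect every test phase that was skipped
    if report.skipped:
        _skipped_tests.append(report.nodeid)


def pytest_sessionfinish(session, exitstatus):
    # if everything "passed" but something was skipped, fail the run instead
    if _skipped_tests and exitstatus == pytest.ExitCode.OK:
        session.exitstatus = pytest.ExitCode.TESTS_FAILED
```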
Force-pushed from 9a8d4d3 to 903496b
Fixing test quota limits
Changing URL to use the lightspeed team's one
Force-pushed from 903496b to b7e94ff
/retest
If we test this just against openai, can the configs for the other providers be removed?
Yeah, I added them as an "if we want to cover it for other providers" option, but for the time being this is horizontal to all providers, so I don't really mind whether we test with all providers or just one. I also don't think they add enough clutter to need removing, but we can remove them if you want.
Let's remove them if they are not used.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: onmete
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@JoaoFula: all tests passed!
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.