updating documentation and refresh the cache #432
Bala-Sakabattula wants to merge 5 commits into release-engineering:uat-instance from
Conversation
Review Summary by Qodo
Add OAuth 2.0 authentication support with cache invalidation
Walkthrough

Description
• Add OAuth 2.0 authentication support alongside existing PAT method
• Implement automatic token cache invalidation on auth failures
• Update configuration examples and documentation for both auth methods
• Add comprehensive test coverage for OAuth2 cache invalidation

Diagram
```mermaid
flowchart LR
    A["Auth Config"] -->|"PAT or OAuth2"| B["build_jira_client_kwargs"]
    B -->|"OAuth2"| C["Token Cache"]
    D["JIRAError"] -->|"Invalidate"| E["invalidate_oauth2_cache_for_config"]
    E -->|"Clear Cache"| C
    C -->|"Next Request"| B
```
File Changes
1. sync2jira/jira_auth.py
Code Review by Qodo
Speaking of nits and code quality suggestions, both of Qodo's "Review recommended" comments above are valid:
@webbnh, I’ve added a test case to cover the new logic in the get_jira_client function when invalidate_oauth2_cache is set to True.
Could you please clarify which function you are referring to? In test_main, we already have a test case for invalidate_oauth2_cache_for_config, and there are existing test cases for get_jira_client as well.
webbnh left a comment
> I’ve added a test case to cover the new logic in the get_jira_client function when invalidate_oauth2_cache is set to True.
Excellent! Thank you!
> On the other hand, there is no unit test at all for that function
> Could you please clarify which function you are referring to? In test_main, we already have a test case for invalidate_oauth2_cache_for_config, and there are existing test cases for get_jira_client as well.
You are correct, I was mistaken. However, this results in a mystery: one of those tests should be driving coverage for a branch which is being reported as not covered. If you're game, it would be good to fix that test (I can see why it isn't driving the coverage, but I'm not seeing how it is passing.)
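One common way a call-assertion test passes while leaving the helper's own branches uncovered is mocking: the test patches the helper and only asserts that it was called, so the real body, including its early-return guard, never executes. The following is a hypothetical sketch using stand-in names, not the PR's actual test code.

```python
from unittest import mock


class JiraAuth:
    """Stand-in for sync2jira.jira_auth; the real module's internals may differ."""

    @staticmethod
    def invalidate_oauth2_cache_for_config(config):
        oauth2_cfg = config.get("oauth2")
        if not oauth2_cfg or not isinstance(oauth2_cfg, dict):
            return  # the branch coverage reports as missed


def handle_auth_failure(module, config):
    # Stand-in for the code under test: on an auth failure it invalidates the cache.
    module.invalidate_oauth2_cache_for_config(config)


# The patched test passes, yet the guard above never runs:
with mock.patch.object(JiraAuth, "invalidate_oauth2_cache_for_config") as m:
    handle_auth_failure(JiraAuth, {"oauth2": "invalid"})
    m.assert_called_once_with({"oauth2": "invalid"})
```

Because the mock swallows the call, the test verifies the wiring but drives zero coverage of the helper itself, which would explain a branch reported as uncovered despite a passing test.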
webbnh left a comment
Are you doing what you think you're doing? 🙂
```python
    "oauth2": "invalid",  # not a dict, so branch around token fetch is taken
}
with self.assertRaises(ValueError) as ctx:
    build_jira_client_kwargs(config)
```
Is this test supposed to be targeting invalidate_oauth2_cache_for_config() rather than build_jira_client_kwargs()?
But it does not raise any exception to check; it just returns normally. Also, in the following test case, on this line, we verify that invalidate_oauth2_cache_for_config() is called with the invalid oauth2 config. Since there is no exception to check, we assert that the function is called.
Normally, when we give a function bad inputs, it reports an error, and so we devise a test which supplies such inputs and ensures that the error occurs. However, in this case, the CUT is intended to report no errors when given bad inputs, and so we should have a test which ensures that no exception is raised when the bad inputs are supplied. And, as you've observed, such a test would not include any explicit assertions, because none are required (and, in fact, none can be made); it would rely instead on the test framework reporting a failure if any error were "unexpectedly" raised, because that, in fact, would be the case.
Just to clarify whether we want to explicitly test the following condition:
```python
oauth2_cfg = jira_instance_config.get("oauth2")
if not oauth2_cfg or not isinstance(oauth2_cfg, dict):
    return
```
In this case, the function simply returns early and does not raise an exception or produce any observable behavior. If we add a test for this, it would look something like:
```python
def test_jira_auth_invalidate_oauth2_no_oauth2_or_not_dict(self):
    """invalidate_oauth2_cache_for_config returns early when oauth2 is missing or not a dict."""
    for config in [
        {},
        {"options": {}},
        {"oauth2": None},
        {"oauth2": "not_a_dict"},
    ]:
        invalidate_oauth2_cache_for_config(config)
```
However, since there is no exception or return value to assert, the test would mainly ensure that the function executes without errors (essentially for coverage). In the tagged test case as well, we are only asserting that the function is called. Just checking if adding this test is still required, or if it can be skipped since it primarily increases coverage without validating specific behavior.
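A variant of the loop above (an editorial sketch, not code from the PR) wraps each config in subTest, so that a failure report names the exact config that raised instead of aborting the whole loop. The function under test here is a minimal stand-in with the same early-return guard.

```python
import unittest


def invalidate_oauth2_cache_for_config(config):
    # Minimal stand-in for the function under test: return early on bad input.
    oauth2_cfg = config.get("oauth2")
    if not oauth2_cfg or not isinstance(oauth2_cfg, dict):
        return


class TestInvalidateCache(unittest.TestCase):
    def test_bad_configs_do_not_raise(self):
        for config in [{}, {"options": {}}, {"oauth2": None}, {"oauth2": "x"}]:
            with self.subTest(config=config):
                # No explicit assertion needed: the framework fails this
                # subtest if an exception escapes the call.
                invalidate_oauth2_cache_for_config(config)
```

With subTest, one bad config does not mask the others: each input is reported pass/fail independently in the test output.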
It is validating the absence of specific behavior, which is, in effect, validating specific behavior: it is showing that, if you call the function with a bad configuration, it still returns normally. And, there is nothing wrong with increasing coverage when the cost to do so is this low.
It's not required -- this PR has been approved. Alternatively, as I said in Slack, you could just remove the uncovered code, instead of adding a test for it: if we don't mind crashing in that case, or if we're unwilling to test it, we would probably be better off if that check weren't there.
That looks like an excellent test. 🙂
Updated the documentation and refreshed the cache.