Conversation
rely on backoff to manage exceptions and retry
Pull Request Overview
This PR updates the SPARQL query execution function to rely on the @wbi_backoff() decorator for retry logic instead of implementing custom retry loops. The change removes the manual retry loop and max_retries parameter, delegating retry management to the backoff decorator.
Key changes:
- Removed manual retry loop and `max_retries` parameter from `execute_sparql_query`
- Simplified exception handling to rely on backoff decorator for retries
- Updated error handling to raise exceptions that can be caught by the backoff mechanism
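Concretely, the function drops its retry loop and its `max_retries` parameter and is decorated instead. A minimal sketch of that shape (the import path is an assumption, and this is not the exact patch):

```python
from wikibaseintegrator.wbi_backoff import wbi_backoff  # assumed import path

@wbi_backoff()  # retries are now configured via the backoff decorator
def execute_sparql_query(query, sparql_endpoint_url=None, headers=None):
    # ... build params, POST the query, and raise on failure so the
    # decorator can catch the exception and retry.
    ...
```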
    try:
        response = helpers_session.post(sparql_endpoint_url, params=params, headers=headers)
    except BaseException as e:
        if config['BACKOFF_MAX_TRIES'] > 1:
            sleep(retry_after)
            continue
        raise e
    else:
        if response.status_code in (500, 502, 503, 504):
Catching BaseException is overly broad and will catch system exit signals like KeyboardInterrupt and SystemExit. This should be Exception instead to avoid interfering with system-level exceptions.
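A tiny standalone illustration of the difference (not from the PR): `KeyboardInterrupt` and `SystemExit` derive from `BaseException` but not from `Exception`, so only the broad handler swallows them:

```python
def classify(exc):
    """Report which handler catches a given exception instance."""
    try:
        raise exc
    except Exception:
        return "caught by 'except Exception'"
    except BaseException:
        return "only caught by 'except BaseException'"

print(classify(ValueError("transient HTTP failure")))  # caught by 'except Exception'
print(classify(KeyboardInterrupt()))                   # only caught by 'except BaseException'
print(classify(SystemExit()))                          # only caught by 'except BaseException'
```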
    except BaseException as e:
        if config['BACKOFF_MAX_TRIES'] > 1:
            sleep(retry_after)
            continue
        raise e
The sleep operation defeats the purpose of using the backoff decorator. The backoff decorator should handle all timing delays, so this manual sleep should be removed to let backoff manage retry timing properly.
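For comparison, this is how retry timing looks when the decorator owns it. A minimal sketch using the third-party `backoff` package (which `wbi_backoff` is assumed to build on; that assumption and all names here are illustrative, not taken from this PR):

```python
import backoff
import requests

@backoff.on_exception(backoff.expo, requests.exceptions.RequestException, max_tries=5)
def fetch_json(url):
    # No sleep() in the body: on failure we simply raise, and backoff.expo
    # decides how long to wait before the next attempt.
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()
```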
| raise Exception("Service unavailable (HTTP Code %d)." % (response.status_code)) | ||
| if response.status_code == 429: | ||
| if 'retry-after' in response.headers.keys(): | ||
| retry_after = int(response.headers['retry-after']) | ||
| log.error("Too Many Requests (429). Sleeping for %d seconds", retry_after) | ||
| sleep(retry_after) | ||
| continue | ||
| response.raise_for_status() | ||
| raise Exception("Too Many Requests (429).") |
Using generic Exception reduces error handling precision. Consider using a more specific exception type like requests.HTTPError or creating a custom exception class for HTTP service errors.
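One way to make the failure type explicit would be a small custom exception for server-side errors, which the backoff configuration could then target directly (illustrative only, not part of the PR):

```python
class SparqlServiceUnavailableError(Exception):
    """Retryable server-side failure (HTTP 500/502/503/504)."""

    def __init__(self, status_code):
        super().__init__(f"Service unavailable (HTTP Code {status_code}).")
        self.status_code = status_code


# In the query function, instead of a bare Exception:
#     raise SparqlServiceUnavailableError(response.status_code)
```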
            sleep(retry_after)
            continue
            response.raise_for_status()
            raise Exception("Too Many Requests (429).")
Using generic Exception reduces error handling precision. Consider using a more specific exception type like requests.HTTPError or creating a custom exception class for rate limiting errors.
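Likewise, a dedicated rate-limit exception could carry the server's Retry-After hint so whichever layer handles the retry can honour it (again illustrative, not part of the PR):

```python
class TooManyRequestsError(Exception):
    """HTTP 429 response; optionally carries the server-suggested wait time."""

    def __init__(self, retry_after=None):
        super().__init__("Too Many Requests (429).")
        self.retry_after = retry_after


# In the query function:
#     raise TooManyRequestsError(response.headers.get('retry-after'))
```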
        if config['BACKOFF_MAX_TRIES'] > 1:
            sleep(retry_after)
            continue
        raise e
    else:
        if response.status_code in (500, 502, 503, 504):
            log.error("Service unavailable (HTTP Code %d). Sleeping for %d seconds.", response.status_code, retry_after)
            sleep(retry_after)
            continue
            raise Exception("Service unavailable (HTTP Code %d)." % (response.status_code))
        if response.status_code == 429:
            if 'retry-after' in response.headers.keys():
                retry_after = int(response.headers['retry-after'])
            log.error("Too Many Requests (429). Sleeping for %d seconds", retry_after)
            sleep(retry_after)
            continue
            response.raise_for_status()
            raise Exception("Too Many Requests (429).")
Manual sleep calls before raising exceptions will cause double delays when combined with the backoff decorator. The backoff decorator should handle all retry timing, so these sleep calls should be removed.
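Putting these comments together, once `@wbi_backoff()` is responsible for waiting, the status handling reduces to raise-only branches. A rough sketch reusing the illustrative exception classes above (import path and helper names assumed, not the exact patch):

```python
import requests
from wikibaseintegrator.wbi_backoff import wbi_backoff  # assumed import path

helpers_session = requests.Session()

@wbi_backoff()
def execute_sparql_query(query, sparql_endpoint_url, headers=None):
    params = {'query': query, 'format': 'json'}
    response = helpers_session.post(sparql_endpoint_url, params=params, headers=headers or {})
    if response.status_code in (500, 502, 503, 504):
        # No sleep() and no continue: raising is enough, the decorator retries.
        raise SparqlServiceUnavailableError(response.status_code)
    if response.status_code == 429:
        raise TooManyRequestsError(response.headers.get('retry-after'))
    response.raise_for_status()
    return response.json()
```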
Rely on backoff to manage exceptions and retry.
Addresses #453