gh-91555: disable logger while handling log record #131812
Lib/logging/__init__.py

@@ -1459,6 +1459,9 @@ def _clear_cache(self):
 # Logger classes and functions
 #---------------------------------------------------------------------------

+_tls = threading.local()
+_tls.in_progress = False
+
 class Logger(Filterer):
     """
     Instances of the Logger class represent a single logging channel. A
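For context, the failure this flag guards against (gh-91555) is a handler that itself logs while handling a record. A minimal illustrative reproducer — NoisyHandler and the "app" logger are invented for this sketch, not part of the PR:

    import logging

    class NoisyHandler(logging.Handler):
        def emit(self, record):
            # A handler that reports a problem through the same logging
            # machinery re-enters Logger.handle() on the same thread...
            logging.getLogger("app").warning("problem while emitting")

    logger = logging.getLogger("app")
    logger.addHandler(NoisyHandler())
    logger.error("boom")  # ...and recurses until RecursionError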
@@ -1670,14 +1673,19 @@ def handle(self, record):
         This method is used for unpickled records received from a socket, as
         well as those created locally. Logger-level filtering is applied.
         """
-        if self.disabled:
-            return
-        maybe_record = self.filter(record)
-        if not maybe_record:
-            return
-        if isinstance(maybe_record, LogRecord):
-            record = maybe_record
-        self.callHandlers(record)
+        if self._is_disabled():
+            return
Review thread on the _is_disabled() check:

Reviewer: Rather than disabling the logging, can we instead append the record to a queue and handle it once the current record is done? Something like:

    maybe_record = self.filter(record)
    if not maybe_record:
        return
    if isinstance(maybe_record, LogRecord):
        record = maybe_record
    was_calling_handlers = set_calling_handlers()
    try:
        if not was_calling_handlers:
            self.callHandlers(record)
            while True:
                try:
                    record = self._reentrant_records.popleft()
                except IndexError:
                    return
                self.callHandlers(record)
        else:
            self._reentrant_records.append(record)
    finally:
        set_not_calling_handlers()

duaneg: This will still produce a stack overflow if handling the deferred log message itself logs another message.

Reviewer: I'm not following how this would cause a stack overflow. If handling the log message logs another message, it would go onto the _reentrant_records queue and then be processed later, once the stack returns all the way back to the call that is draining the queue.
duaneg: Sorry, I should have said deadlock with the current example. The stack overflow is from a different way of triggering this (see the second unit test added). The trouble is that when the first recursive logging call exits, the finally block clears the flag even though the outer call is still handling records.

That can be avoided for the original triggering example by only clearing the "handling" flag if it was initially unset (the deferred records collection also needs to be TLS, not a member variable). It ends up looking something like this:

    if not hasattr(self._tls, 'reentrant_records'):
        self._tls.reentrant_records = deque()
    deferred = self._tls.reentrant_records
    was_calling_handlers = self.set_calling_handlers()
    try:
        if not was_calling_handlers:
            self.callHandlers(record)
            while deferred:
                self.callHandlers(deferred.popleft())
        else:
            deferred.append(record)
    finally:
        if not was_calling_handlers:
            self.set_not_calling_handlers()

This fixes the two bugs, which only log the first time they try to process a log record (and means those recursive log messages are logged rather than silently ignored, which is nice). However, a different example which logs every time (such as the second unit test) will still live-lock and never exit that while loop.
Reviewer: Does the system keep logging forever instead? This seems acceptable, as you'd easily track down this bug just by looking at the logs.

duaneg: Yep, it will just sit in a tight loop spamming the log forever, or at least until/unless that exhausts disk space or wherever the logs are actually going.

IMO it is not a great failure mode, but it will certainly be obvious! FWIW I think I prefer ignoring them: the code is much simpler, and it prevents the issue in non-trivial handler implementations like Sentry's (which would otherwise trigger the live-lock failure). I was hoping this fix would mean they would be able to remove that nasty monkey-patching on supported versions. OTOH it is definitely nice to actually handle, rather than drop, the recursive log messages in cases where they don't always continue to recurse.

Reviewer: we could have a

duaneg: We could. Another alternative would be Victor Stinner's suggestion in the discourse discussion to raise an exception. That would bring it to the user's attention and force them to deal with it. Ultimately, though, the way they will have to deal with it is by preventing, disabling, or otherwise intercepting and ignoring all such logging. That will be difficult to do reliably outside the core, will likely be overlooked unless/until it bites, and would have to be done in every susceptible log handler or application that uses one. IMO it would be better for us to do this once, centrally, with a small, simple, and robust fix.
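To make the live-lock scenario concrete, here is a hypothetical handler of the kind discussed above (ChattyHandler is an invented name). It logs on every record it handles, so under the deferred-queue sketch each record drained from the queue appends a fresh one and the drain loop never empties:

    import logging

    class ChattyHandler(logging.Handler):
        def emit(self, record):
            # Under the queue-based variant this warning is appended to the
            # deferred queue; the outer drain loop then handles it, which
            # appends another, so handle() never returns.
            logging.getLogger("app").warning("handled a record")

The diff continues: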
+
+        _tls.in_progress = True
+        try:
+            maybe_record = self.filter(record)
+            if not maybe_record:
+                return
+            if isinstance(maybe_record, LogRecord):
+                record = maybe_record
+            self.callHandlers(record)
+        finally:
+            _tls.in_progress = False

     def addHandler(self, hdlr):
         """
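With this change applied, a handler that logs from emit() no longer recurses: the nested call sees the logger as disabled and the record is dropped. A hypothetical sketch of the resulting behaviour (ProbeHandler and the "app" logger are invented for illustration):

    import logging

    class ProbeHandler(logging.Handler):
        def emit(self, record):
            inner = logging.getLogger("app")
            # While a record is being handled, the thread-local guard makes
            # the logger report itself as disabled...
            print("enabled while handling?", inner.isEnabledFor(logging.ERROR))
            inner.error("recursive")  # ...so this call is dropped, not recursed

    logger = logging.getLogger("app")
    logger.addHandler(ProbeHandler())
    logger.error("boom")  # prints: enabled while handling? False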
@@ -1765,7 +1773,7 @@ def isEnabledFor(self, level):
         """
         Is this logger enabled for level 'level'?
         """
-        if self.disabled:
+        if self._is_disabled():
             return False

         try:
@@ -1815,6 +1823,9 @@ def _hierlevel(logger):
                    if isinstance(item, Logger) and item.parent is self and
                    _hierlevel(item) == 1 + _hierlevel(item.parent))

+    def _is_disabled(self):
+        return self.disabled or getattr(_tls, 'in_progress', False)
+
     def __repr__(self):
         level = getLevelName(self.getEffectiveLevel())
         return '<%s %s (%s)>' % (self.__class__.__name__, self.name, level)
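Note why _is_disabled reads the flag via getattr with a default: the module-level _tls.in_progress = False assignment only creates the attribute in the thread that first imports logging; every other thread starts with an empty threading.local. This also means the guard is per-thread, so a record being handled in one thread does not disable logging elsewhere. A standalone illustration (not part of the diff):

    import threading

    _tls = threading.local()
    _tls.in_progress = True  # set in this thread only

    def check():
        # A fresh thread sees no 'in_progress' attribute at all, hence
        # the getattr(..., False) default in _is_disabled().
        print(getattr(_tls, "in_progress", False))  # -> False

    t = threading.Thread(target=check)
    t.start()
    t.join()
    print(getattr(_tls, "in_progress", False))  # -> True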