There's nothing out-of-the-box, no. I would recommend that you try either using the Python debugger or wrapping components in some kind of timer.

Since you can get the components from the language pipeline, and since they're executed at inference time simply by invoking their __call__ method, you should be able to wrap each one in a timer function. Something like this:

for name, pipe in nlp.pipeline:
    pipe.__call__ = timer(pipe.__call__)

Where timer is some kind of function that times calls while passing through arguments and return values.
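
As a rough sketch (the timer helper and its label argument here are illustrative, not part of spaCy's API), it could look like this:

import time

def timer(func, label=None):
    # Pass-through wrapper: forwards all arguments and the return value,
    # and prints how long each call took.
    label = label or getattr(func, "__qualname__", repr(func))
    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{label}: {time.perf_counter() - start:.4f}s")
        return result
    return wrapped

One caveat: when you call pipe(doc), Python looks up __call__ on the component's class, and some built-in components are Cython classes that may not allow setting attributes, so the instance-level assignment above may not take effect. A simpler variant is to time each component call directly (assuming nlp is an already-loaded pipeline):

doc = nlp.make_doc("Some text to profile.")
for name, pipe in nlp.pipeline:
    start = time.perf_counter()
    doc = pipe(doc)
    print(f"{name}: {time.perf_counter() - start:.4f}s")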

Answer selected by narayanacharya6