Detector dispatcher decorator #18
Conversation
Signed-off-by: Gaurav-Kumbhat <[email protected]>
evaline-ju left a comment
A couple q's/comments!
def request_to_chat_completion_request(
@detector_dispatcher(types=[DetectorType.TEXT_CONTEXT_DOC])
def _request_to_chat_completion_request(
nit: if we're keeping this "private" since this only applies to granite guardian, maybe this should go in the "Private functions" section above?
I kept these "internal" since there were unit tests for this particular function, so testing them after making them private would get interesting. But it might still make sense to move them to the private block and rename that block.
Ah, I misread the number of underscores; it might be good to have an "internal functions" block then.
def f(x):
    pass
@detector_dispatcher(types=["bar"])
It's a bit unclear to me whether multiple types are allowed (due to the list), but I assume not if the pattern is to re-implement the function for each different type. If so, can this be made explicit? Otherwise I might have misunderstood the pattern.
Multiple types are allowed if they share the same implementation. This is basically to handle cases where, for example, we have 3 types, 2 of which share the same implementation while 1 has a different one.
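For illustration, here is a minimal sketch of how that 3-types example could look. This is only an assumption about the mechanism: the `detector_dispatcher` name, the `types=` keyword, and `DetectorType.TEXT_CONTEXT_DOC` come from this PR, but the registry, the `dispatch` helper, the other enum members, and the `preprocess_request` bodies are made up here and may differ from the actual implementation.

```python
from enum import Enum
from typing import Callable, Dict, List, Tuple


class DetectorType(Enum):
    # TEXT_CONTEXT_DOC appears in the PR; the other members are placeholders for this sketch
    TEXT_CONTENTS = "text_contents"
    TEXT_CHAT = "text_chat"
    TEXT_CONTEXT_DOC = "text_context_doc"


# Hypothetical registry mapping (function name, detector type) -> implementation
_REGISTRY: Dict[Tuple[str, DetectorType], Callable] = {}


def detector_dispatcher(types: List[DetectorType]):
    """Register one implementation of a function for one or more detector types."""
    def decorator(fn: Callable) -> Callable:
        for detector_type in types:
            _REGISTRY[(fn.__name__, detector_type)] = fn
        return fn
    return decorator


def dispatch(fn_name: str, detector_type: DetectorType, *args, **kwargs):
    """Call the implementation registered for this function name and detector type."""
    return _REGISTRY[(fn_name, detector_type)](*args, **kwargs)


# Two of the three types share one implementation...
@detector_dispatcher(types=[DetectorType.TEXT_CONTENTS, DetectorType.TEXT_CHAT])
def preprocess_request(request):
    # shared handling (placeholder body)
    return request.detector_params


# ...while the third type gets its own implementation under the same name.
@detector_dispatcher(types=[DetectorType.TEXT_CONTEXT_DOC])
def preprocess_request(request):
    # context-doc-specific handling (placeholder body)
    return {"context_doc": True, **request.detector_params}
```

In this sketch the module-level name ends up pointing at whichever definition came last, so callers would go through the registry (here via `dispatch`) rather than the plain function name; that detail is an assumption about how the decorator is wired up.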
My worry with the multiple types had been the assumptions about type attributes; for example, request.detector_params exists for most of our request types today, but there are other request attributes that will not be common. I assume we will then have to carefully type hint like:
@detector_dispatcher(types=["foo", "bar"])
def f(x: Union[foo, bar]):
    pass
but then it might be more difficult to ensure testing? (i.e. function coverage won't be enough; we'd need to make sure there's coverage for each type?)
If I'm understanding the concern correctly, each of the functions can have a different type and a different number of arguments. So in that sense, this approach shouldn't assume or block any of that, and should actually allow us to more cleanly separate those kinds of functions 🤔
Hm, but as I understand it, the idea with multiple types is to allow consolidation on the same function names. Otherwise, if we're separating out the functions anyway, then devs could potentially just write one function per type and not need the decorator. So we would essentially be encouraging re-use of the same function names. My concern there is how to make sure those are tested with all the supported types (under the same function name) in case there are attribute differences.
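If helpful for that coverage concern, a pytest-style sketch, building on the hypothetical registry/dispatch helper from the sketch above (the stub request class, parameters, and assertions are made up for illustration), could parametrize over every registered type so each variant behind the shared function name gets exercised:

```python
import pytest


class _StubRequest:
    """Hypothetical stand-in for the per-type request classes."""
    def __init__(self, detector_params):
        self.detector_params = detector_params


@pytest.mark.parametrize(
    "detector_type",
    [
        DetectorType.TEXT_CONTENTS,
        DetectorType.TEXT_CHAT,
        DetectorType.TEXT_CONTEXT_DOC,
    ],
)
def test_preprocess_request_covers_every_registered_type(detector_type):
    # Call the implementation registered for each type, not just the
    # module-level name, so per-type attribute differences get caught.
    request = _StubRequest(detector_params={"threshold": 0.5})
    result = dispatch("preprocess_request", detector_type, request)
    assert result["threshold"] == 0.5
```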
    return request


def preprocess_chat_request(
##### General chat completion output processing functions ##################
This section header doesn't seem quite right here, since the function below preprocess_request is not related to output processing
Signed-off-by: Gaurav-Kumbhat <[email protected]>
Co-authored-by: Evaline Ju <[email protected]>
Signed-off-by: Gaurav-Kumbhat <[email protected]>
85196c1 to 13f8a4c
Signed-off-by: Gaurav-Kumbhat <[email protected]>
pyproject.toml (Outdated)
dependencies = [
    "vllm>=0.7.0"
    "vllm @ git+https://github.com/vllm-project/[email protected] ; sys_platform == 'darwin'",
Seems 0.7.1 was released over the weekend (https://github.com/vllm-project/vllm/releases/tag/v0.7.1), and unfortunately this tag method is a bit less flexible for non-breaking updates.
It's tricky to get installation via git to work with operators like >. We might have to create some sort of automation to update it when newer versions are available, but that seems a bit overkill at the moment. So I suggest updating the version for mac manually for now.
Signed-off-by: Gaurav-Kumbhat <[email protected]>
evaline-ju left a comment
LGTM, small note but does not require change
dependencies = [
    "vllm>=0.7.0"
    "vllm @ git+https://github.com/vllm-project/[email protected] ; sys_platform == 'darwin'",
    "vllm>=0.7.1 ; sys_platform != 'darwin'",
I don't think this one strictly needed to change since it should be backwards compatible, but likely not an issue for current usage