
Conversation

@gkumbhat (Collaborator) commented Jan 31, 2025

Changes

  • Pin the vllm dependency so it is installable on Mac M1.
  • Add a detector dispatcher decorator to make it cleaner to call out to detector-specific functions (see the illustrative sketch below).
  • Add a detector type enum.
  • Replace the preprocessing function with use of the decorator.
  • Add unit tests for the decorator.
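
For illustration, a minimal sketch of what such a dispatcher decorator could look like follows. This is an assumption about the pattern, not the PR's actual implementation: the registry, the dispatch() helper, and every DetectorType member other than TEXT_CONTEXT_DOC (which appears in the diff) are hypothetical names.

from enum import Enum
from typing import Callable, Dict, List, Tuple

class DetectorType(Enum):
    # TEXT_CONTEXT_DOC appears in the PR diff; the other members are illustrative placeholders
    TEXT_CONTENTS = "text_contents"
    TEXT_CONTEXT_DOC = "text_context_doc"
    TEXT_CHAT = "text_chat"

# Registry mapping (function name, detector type) -> implementation
_REGISTRY: Dict[Tuple[str, DetectorType], Callable] = {}

def detector_dispatcher(types: List[DetectorType]):
    """Register the decorated function as the implementation for each given detector type."""
    def decorator(fn: Callable):
        for detector_type in types:
            _REGISTRY[(fn.__name__, detector_type)] = fn
        return fn
    return decorator

def dispatch(fn_name: str, detector_type: DetectorType, *args, **kwargs):
    """Call the implementation registered under fn_name for this detector type."""
    impl = _REGISTRY.get((fn_name, detector_type))
    if impl is None:
        raise ValueError(f"No implementation of {fn_name!r} registered for {detector_type}")
    return impl(*args, **kwargs)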

@gkumbhat gkumbhat changed the title Detector dispatcher deco 2 Detector dispatcher decorator Jan 31, 2025
@gkumbhat gkumbhat mentioned this pull request Jan 31, 2025
@gkumbhat gkumbhat requested a review from evaline-ju January 31, 2025 16:57
@evaline-ju (Collaborator) left a comment:

A couple q's/comments!


-def request_to_chat_completion_request(
+@detector_dispatcher(types=[DetectorType.TEXT_CONTEXT_DOC])
+def _request_to_chat_completion_request(
evaline-ju (Collaborator):

nit: if we're keeping this "private" since this only applies to granite guardian, maybe this should go in the "Private functions" section above?

@gkumbhat (Collaborator, Author) replied Jan 31, 2025:

I kept these "internal" since there were unit tests for this particular function, so testing them after making them private would get interesting.

But it might still make sense to move them to the private block and rename that block.

evaline-ju (Collaborator):

Ah, I misread the number of underscores; it might be good to have an "internal functions" block then.

def f(x):
    pass

@detector_dispatcher(types=["bar"])
evaline-ju (Collaborator):

It's a bit unclear to me whether multiple types are allowed (due to the list), but I assume not if the pattern is to re-implement the function for each different type. If so, can this be made explicit? Otherwise I might have misunderstood the pattern.

gkumbhat (Collaborator, Author):

Multiple types are allowed if they share the same implementation. This is basically to handle cases where, for example, we have 3 types, 2 of which share an implementation while the third has a different one.
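
To illustrate that point (building on the hypothetical sketch above, not the PR's actual code): two types can share one registered implementation while a third gets its own, all under the same function name.

@detector_dispatcher(types=[DetectorType.TEXT_CONTENTS, DetectorType.TEXT_CHAT])
def preprocess(request):
    # shared handling for the two types that behave the same way
    return request

@detector_dispatcher(types=[DetectorType.TEXT_CONTEXT_DOC])
def preprocess(request):  # noqa: F811 -- intentionally re-registered under the same name
    # type-specific handling for the third type; the registry keeps both implementations
    return request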

evaline-ju (Collaborator):

My worry with multiple types had been around assumptions about type attributes: for example, request.detector_params exists for most of our request types today, but there are other request attributes that will not be common. I assume then we will have to carefully type hint, like:

@detector_dispatcher(types=["foo", "bar"])
def f(x: Union[foo, bar]):
    pass

but then it might be more difficult to ensure testing? (i.e. function coverage then won't be enough, we'd need to make sure there's coverage for each type?)

gkumbhat (Collaborator, Author):

If I am understanding the concern correctly, each of the functions can have a different type and a different number of arguments. So in that sense, this approach shouldn't assume or block any of that, and it actually should allow us to more cleanly separate those types of functions 🤔

@evaline-ju (Collaborator) replied Jan 31, 2025:

Hm but as I understand it, the idea with multiple types would allow for consolidation on the same function names. Otherwise, if we're separating out the functions anyway, then devs can potentially just write functions per type and not need the decorator. So we would essentially encourage the re-use of the same function names. My concern there is how to make sure that those are tested with all the supported types (with the same function name) in case there are attribute differences.
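
One way to address the coverage concern, sketched against the hypothetical dispatch() helper and preprocess registrations above (the request payloads here are placeholders, not the project's real request types), is to parametrize the test over every registered type so attribute differences are exercised explicitly rather than relying on function-level coverage alone:

import pytest

@pytest.mark.parametrize(
    "detector_type, request_obj",
    [
        (DetectorType.TEXT_CONTENTS, {"detector_params": {}, "contents": ["hi"]}),
        (DetectorType.TEXT_CONTEXT_DOC, {"detector_params": {}, "context": ["doc"]}),
    ],
)
def test_preprocess_covers_each_type(detector_type, request_obj):
    # one case per registered type, so attribute differences between request shapes are covered
    result = dispatch("preprocess", detector_type, request_obj)
    assert result is not None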

return request

def preprocess_chat_request(
##### General chat completion output processing functions ##################
evaline-ju (Collaborator):

This section header doesn't seem quite right here, since the function below preprocess_request is not related to output processing

gkumbhat and others added 2 commits January 31, 2025 12:12
Signed-off-by: Gaurav-Kumbhat <[email protected]>
Co-authored-by: Evaline Ju <[email protected]>
Signed-off-by: Gaurav-Kumbhat <[email protected]>
@gkumbhat gkumbhat force-pushed the detector_dispatcher_deco_2 branch from 85196c1 to 13f8a4c on January 31, 2025 18:25
pyproject.toml (outdated)

 dependencies = [
-    "vllm>=0.7.0"
+    "vllm @ git+https://github.com/vllm-project/vllm@v0.7.0 ; sys_platform == 'darwin'",
evaline-ju (Collaborator):

Seems 0.7.1 was released over the weekend https://github.com/vllm-project/vllm/releases/tag/v0.7.1 and unfortunately this tag method is a bit less flexible to non-breaking updates

gkumbhat (Collaborator, Author):

It's tricky to get the installation via git to work with operators like >. We might have to create some sort of automation to update it when newer versions are available, but that seems a bit overkill at the moment. So I suggest updating the version for Mac manually for now.

Signed-off-by: Gaurav-Kumbhat <[email protected]>
@evaline-ju (Collaborator) left a comment:

LGTM, small note but does not require change

 dependencies = [
-    "vllm>=0.7.0"
+    "vllm @ git+https://github.com/vllm-project/vllm@v0.7.1 ; sys_platform == 'darwin'",
+    "vllm>=0.7.1 ; sys_platform != 'darwin'",
evaline-ju (Collaborator):

I don't think this one strictly needed to change since it should be backwards compatible, but likely not an issue for current usage

@gkumbhat gkumbhat merged commit a435f31 into foundation-model-stack:main Feb 3, 2025
3 checks passed
@gkumbhat gkumbhat deleted the detector_dispatcher_deco_2 branch February 3, 2025 16:33
@evaline-ju evaline-ju linked an issue Feb 3, 2025 that may be closed by this pull request

Successfully merging this pull request may close this issue: Exploration of function decorator for common adapter functions