Conversation

@ZachCafego (Contributor) commented Jan 10, 2025

@jrobble (Member) left a comment


Reviewable status: 0 of 29 files reviewed, 4 unresolved discussions (waiting on @ZachCafego)


python/LlavaDetection/llava_component/llava_component.py line 297 at r6 (raw file):

                    else:
                        idx += config.frames_per_second_to_process
                        while (idx not in job_feed_forward.frame_locations) and (idx <= max(job_feed_forward.frame_locations)):

Any changes you make to this function to fix this logic also need to be applied to `_get_feed_forward_detections_json()`.
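A self-contained sketch of the frame-index scan both functions share (the helper name is hypothetical; `frame_locations` and the step value are taken from the snippet above):

```python
def advance_to_next_frame(idx, step, frame_locations):
    """Advance idx by step, then scan forward until idx lands on a
    frame that was actually fed forward (or passes the last one)."""
    last = max(frame_locations)
    idx += step
    while idx not in frame_locations and idx <= last:
        idx += 1
    return idx

# e.g. frames 0, 5, and 12 were fed forward; stepping by 3 from frame 0
# skips past the missing frames 3 and 4 and lands on frame 5:
print(advance_to_next_frame(0, 3, {0, 5, 12}))  # 5
```

Factoring the scan out like this would let both `_get_detections_from_videojob`-style paths and `_get_feed_forward_detections_json()` share one copy of the logic.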


python/LlavaDetection/tests/test_llava.py line 59 at r6 (raw file):

        mock_container = MagicMock()
        with unittest.mock.patch("ollama.Client", return_value=mock_container):
            mock_container.generate = Mock(side_effect=side_effect_function)

Lines like `encoded = self._encode_image(frame)` waste time (especially when run on the CPU) because the test doesn't actually do anything with the encoded image. Mock `_encode_image` to be a no-op / return dummy data.
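One way to do this (a sketch only; the stand-in class below is assumed, not the real component) is to patch the method with `unittest.mock.patch.object` so the test never touches real pixel data:

```python
import base64
import unittest.mock

class LlavaComponent:
    """Minimal stand-in for the real component (assumed API)."""
    def _encode_image(self, frame):
        # The real implementation base64-encodes the frame; slow on CPU.
        return base64.b64encode(frame.tobytes()).decode()

    def process(self, frame):
        encoded = self._encode_image(frame)
        return len(encoded)

# Patch _encode_image to return dummy base64 data; the patched method
# accepts any arguments, so the frame itself is never inspected.
with unittest.mock.patch.object(
        LlavaComponent, "_encode_image", return_value="ZHVtbXk="):
    result = LlavaComponent().process(frame=None)

print(result)  # 8 — length of the dummy base64 string
```

The same pattern works alongside the existing `ollama.Client` patch shown above.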


python/LlavaDetection/tests/test_llava.py line 375 at r6 (raw file):

        
        for key, value in result.detection_properties.items():
            self.assertTrue(value.strip().lower() != 'unsure')

Only check properties that start with "LLAVA". Skip `ANNOTATED BY LLAVA` since it is a bool, and this code throws an exception when it tries to `strip()` it.
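A sketch of the suggested assertion loop (the property names and values below are assumptions based on this comment, not taken from the component):

```python
detection_properties = {
    "LLAVA DESCRIPTION": "a dog on a couch",
    "LLAVA ACTIVITY": "sitting",
    "ANNOTATED BY LLAVA": True,  # bool: calling .strip() on it would raise
}

checked = []
for key, value in detection_properties.items():
    if not key.startswith("LLAVA"):
        continue  # skips the bool, avoiding AttributeError on .strip()
    assert value.strip().lower() != "unsure"
    checked.append(key)

print(checked)  # ['LLAVA DESCRIPTION', 'LLAVA ACTIVITY']
```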


python/LlavaDetection/llava_component/llava_component.py line 310 at r4 (raw file):

                    self._get_ollama_response(self.class_prompts[classification], frame, ff_location.detection_properties, video_process_timer)

                idx += 1

Removing this results in `idx` never incrementing for normal (non-nth-frame) processing, because `config.frames_per_second_to_process > 0` is always false.

@ZachCafego (Contributor, Author) left a comment


Reviewable status: 0 of 29 files reviewed, 4 unresolved discussions (waiting on @jrobble and @ZachCafego)


python/LlavaDetection/llava_component/llava_component.py line 310 at r4 (raw file):

Previously, jrobble (Jeff Robble) wrote…

Removing this results in `idx` never incrementing for normal (non-nth-frame) processing, because `config.frames_per_second_to_process > 0` is always false.

I added logic so that the increment happens for full-frame images and not for nth-frame jobs.
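A hypothetical sketch of that conditional increment (the config field name comes from this thread; the helper and the exact branching are assumptions):

```python
def next_index(idx, frames_per_second_to_process):
    """Advance the frame index: by the configured step for
    nth-frame jobs, by one frame for normal full-frame jobs."""
    if frames_per_second_to_process > 0:
        # nth-frame job: skip ahead by the configured step
        return idx + frames_per_second_to_process
    # normal job: advance one frame at a time
    return idx + 1

print(next_index(10, 0))  # 11
print(next_index(10, 5))  # 15
```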


python/LlavaDetection/llava_component/llava_component.py line 297 at r6 (raw file):

Previously, jrobble (Jeff Robble) wrote…

Any changes you make to this function to fix this logic also need to be applied to `_get_feed_forward_detections_json()`.

All changes have been made to the other function.


python/LlavaDetection/tests/test_llava.py line 59 at r6 (raw file):

Previously, jrobble (Jeff Robble) wrote…

Lines like `encoded = self._encode_image(frame)` waste time (especially when run on the CPU) because the test doesn't actually do anything with the encoded image. Mock `_encode_image` to be a no-op / return dummy data.

Done.


python/LlavaDetection/tests/test_llava.py line 375 at r6 (raw file):

Previously, jrobble (Jeff Robble) wrote…

Only check properties that start with "LLAVA". Skip `ANNOTATED BY LLAVA` since it is a bool, and this code throws an exception when it tries to `strip()` it.

Done.

@jrobble (Member) commented Mar 14, 2025

python/LlavaDetection/triton_server/Dockerfile line 1 at r8 (raw file):

# RUN apt-get update && apt-get -y install git git-lfs

Can we remove the entire `triton_server` directory? Is it used?
