
Conversation

@Zhang-Yang-Sustech (Contributor)

EfficientSAM Model Demonstration

I have written the code for the EfficientSAM model and created an interactive demo that supports standard model inference and user interaction. The demo can be run from the command line with the following command:

python demo.py --input /path/to/your/image.jpg

After the user provides the path to the image they wish to segment, the demo displays the image. Clicking on the object to segment opens a new window showing the segmentation result, which makes the demo user-friendly and straightforward.
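For reference, here is a minimal sketch of what such a click-to-segment loop can look like. The window names and the segment_at_point helper below are hypothetical placeholders for illustration, not the actual implementation in this PR's demo.py:

```python
import argparse

import cv2


def segment_at_point(image, x, y):
    # Hypothetical placeholder: run EfficientSAM inference (e.g. via the ONNX
    # model) with the clicked point as a single positive prompt and return a
    # binary mask of the selected object.
    raise NotImplementedError


def on_click(event, x, y, flags, state):
    if event == cv2.EVENT_LBUTTONDOWN:
        image = state["image"]
        mask = segment_at_point(image, x, y)
        overlay = image.copy()
        overlay[mask > 0] = (0, 255, 0)  # highlight the segmented object in green
        blended = cv2.addWeighted(image, 0.5, overlay, 0.5, 0)
        cv2.imshow("segmentation result", blended)  # result opens in a new window


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True, help="path to the input image")
    args = parser.parse_args()

    img = cv2.imread(args.input)
    cv2.imshow("input", img)
    cv2.setMouseCallback("input", on_click, {"image": img})
    cv2.waitKey(0)
```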

Example

[screenshot: example1]

Please note that model inference currently takes approximately 2 seconds. The camera demo is not yet complete; I plan to optimize inference speed and add camera functionality in the future.

Performance Testing

Benchmark testing of the model has not been conducted yet. I will run the benchmarks later and provide the performance data.


Thank you for your attention. I will continue to update and optimize the EfficientSAM model and its demonstration.

@fengyuentau fengyuentau self-assigned this May 17, 2024
@fengyuentau fengyuentau added the `add model` label (request to add a new model) May 17, 2024
@fengyuentau (Member) left a comment

Thank you for your contribution! Please have a look at my comments below.

Comment on lines 3 to 6
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

Notes:
-

The original repo offers EfficientSAM-S and -Ti. Which one is used here? Add a note describing the model, including its version (shasum, md5sum, etc.). Also describe how you converted the model to ONNX (a script would be nice).

You also need to describe how many clicks are required.
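For the conversion script, something along the following lines would do. The builder name and input signature below are assumptions based on my reading of the original EfficientSAM repo, so please double-check them against your actual export procedure:

```python
import hashlib

import torch

# Assumed module path / builder from the original EfficientSAM repo; use the
# -S builder instead if that is the variant shipped in this PR.
from efficient_sam.build_efficient_sam import build_efficient_sam_vitt

model = build_efficient_sam_vitt()
model.eval()

# Dummy inputs: one 1024x1024 image and a single positive click (label 1).
image = torch.zeros(1, 3, 1024, 1024)
points = torch.tensor([[[[512.0, 512.0]]]])  # [batch, queries, points, xy]
labels = torch.tensor([[[1.0]]])             # [batch, queries, points]

torch.onnx.export(
    model,
    (image, points, labels),
    "efficient_sam_vitt.onnx",
    input_names=["batched_images", "batched_points", "batched_point_labels"],
    output_names=["masks", "iou_predictions"],
    opset_version=17,
)

# Record a checksum so the exact model file version can be pinned in the notes.
with open("efficient_sam_vitt.onnx", "rb") as f:
    print("sha256:", hashlib.sha256(f.read()).hexdigest())
```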

@WanliZhong (Member) left a comment

Good job 👍! Thanks for your contribution. Please check the comment I left. By the way, the benchmark and quantized model would also be appreciated if possible.
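For the quantized model, one possible route is dynamic INT8 quantization of the exported ONNX file with onnxruntime; this is only an illustration, and the zoo's own tooling or a calibration-based (static) scheme may be preferred:

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Quantize the weights of the (assumed) exported model to INT8; the file names
# here are placeholders for whatever this PR actually ships.
quantize_dynamic(
    model_input="efficient_sam_vitt.onnx",
    model_output="efficient_sam_vitt_int8.onnx",
    weight_type=QuantType.QInt8,
)
```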

@Zhang-Yang-Sustech (Contributor, Author)

For now, this PR covers only the model and the demo; there is no benchmark.

@fengyuentau fengyuentau added this to the 4.10.0 milestone Jun 3, 2024
@fengyuentau (Member) left a comment

Please fix the following issue, and we will merge this PR for the 4.10.0 release.


You can drop the changes in this file for now, as they affect the benchmark.

@WanliZhong (Member) left a comment

LGTM!

@WanliZhong WanliZhong marked this pull request as ready for review June 4, 2024 07:15
@fengyuentau fengyuentau merged commit aa351d2 into opencv:main Jun 4, 2024