Migrating the EfficientSAM model to the OpenCV model zoo #258
Merged

Commits (16, all by Zhang-Yang-Sustech):

- 11fb27e: a
- 6e27053: add efficientsam model and basic demo
- dc3f586: update license
- 691a559: remove example images
- a5cc02a: update readme
- b0d9d3b: update readme
- ffb1bf4: update demo
- a48e3f5: update demo
- be74b65: update readme
- 7adcf81: update SAM and __init__
- 3a0ff63: update demo and sam
- 7d86141: update label
- 52fb290: add present gif
- d5bc0ce: update readme
- 073464f: add efficientSAM gif to readme of opencvzoo
- 6130312: cv version 4.10.0, remove camera branch
models/image_segmentation_efficientsam/LICENSE (new file):

```
MIT License

Copyright (c) 2024 Zhang-Yang-Sustech

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
models/image_segmentation_efficientsam/README.md (new file):

````markdown
# image_segmentation_efficientsam

EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

Notes:
-

## Demo

### Python

Run the following command to try the demo:

```shell
python demo.py --input /path/to/image
```

### C++

## Result

Here are some sample results obtained with the model:

[two sample result images]

## Model metrics

## License

All files in this directory are licensed under [Apache 2.0 License](./LICENSE).

#### Contributor Details

## Reference

- https://arxiv.org/abs/2312.00863
- https://github.com/yformer/EfficientSAM
````
models/image_segmentation_efficientsam/demo.py (new file):

```python
import argparse
import numpy as np
import cv2 as cv
from efficientSAM import EfficientSam

# Check OpenCV version
assert cv.__version__ >= "4.9.0", \
    "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"

parser = argparse.ArgumentParser(description='EfficientSAM Demo')
parser.add_argument('--input', '-i', type=str,
                    help='Set input path to a certain image, omit if using camera.')
parser.add_argument('--model', '-m', type=str, default='image_segmentation_efficientsam_ti_2024may.onnx',
                    help='Set model path, defaults to image_segmentation_efficientsam_ti_2024may.onnx.')
parser.add_argument('--save', '-s', action='store_true',
                    help='Specify to save a file with results. Invalid in case of camera input.')
args = parser.parse_args()

# global flag set by the mouse callback when the left button is clicked
clicked_left = False
# global record of the clicked point in the window
point = []


def visualize(image, result):
    """
    Visualize the inference result on the input image.

    Args:
        image (np.ndarray): The input image.
        result (np.ndarray): The inference result.

    Returns:
        vis_result (np.ndarray): The visualized result.
    """
    # get image and mask
    vis_result = np.copy(image)
    mask = np.copy(result)
    # turn the mask into a binary image
    t, binary = cv.threshold(mask, 127, 255, cv.THRESH_BINARY)
    assert set(np.unique(binary)) <= {0, 255}, "The mask must be a binary image"
    # enhance the red channel to make the segmented region more obvious
    enhancement_factor = 1.8
    red_channel = vis_result[:, :, 2]
    # update the channel only where the mask is set
    red_channel = np.where(binary == 255, np.minimum(red_channel * enhancement_factor, 255), red_channel)
    vis_result[:, :, 2] = red_channel

    # draw borders around the mask
    contours, hierarchy = cv.findContours(binary, cv.RETR_LIST, cv.CHAIN_APPROX_TC89_L1)
    cv.drawContours(vis_result, contours, contourIdx=-1, color=(255, 255, 255), thickness=2)

    return vis_result


def select(event, x, y, flags, param):
    global clicked_left
    # when the left mouse button is released, record the coordinates of the clicked point
    if event == cv.EVENT_LBUTTONUP:
        point.append([x, y])
        print("point:", point[0])
        clicked_left = True


if __name__ == '__main__':
    # Load the EfficientSAM model
    model = EfficientSam(modelPath=args.model)

    if args.input is not None:
        # Read image
        image = cv.imread(args.input)
        if image is None:
            print('Could not open or find the image:', args.input)
            exit(0)
        # create window
        image_window = "image: click on the thing which you want to segment!"
        cv.namedWindow(image_window, cv.WINDOW_NORMAL)
        # limit the window size; resizeWindow expects (width, height)
        cv.resizeWindow(image_window, min(image.shape[1], 800), min(image.shape[0], 600))
        # put the window on the left of the screen
        cv.moveWindow(image_window, 50, 100)
        # set listener to record the user's click point
        cv.setMouseCallback(image_window, select)
        # tips in the terminal
        print("click the picture on the LEFT and see the result on the RIGHT!")
        # show image
        cv.imshow(image_window, image)
        # wait for clicks
        while cv.waitKey(1) == -1 or clicked_left:
            # a click was received
            if clicked_left:
                # feed the click point (x, y) to the model to predict the mask
                result = model.infer(image=image, points=point, lables=[1])
                # get the visualized result
                vis_result = visualize(image, result)
                # create a window to show the visualized result
                cv.namedWindow("vis_result", cv.WINDOW_NORMAL)
                cv.resizeWindow("vis_result", min(vis_result.shape[1], 800), min(vis_result.shape[0], 600))
                cv.moveWindow("vis_result", 851, 100)
                cv.imshow("vis_result", vis_result)
                # reset the flag to listen for another click
                clicked_left = False
            elif cv.getWindowProperty(image_window, cv.WND_PROP_VISIBLE) < 1:
                # stop when the user closes the image window
                break
            else:
                # when no click is pending, clear the recorded point
                point = []
        cv.destroyAllWindows()

        # Save results if save is true
        if args.save:
            cv.imwrite('./example_outputs/vis_result.jpg', vis_result)
            cv.imwrite('./example_outputs/mask.jpg', result)
            print('vis_result.jpg and mask.jpg are saved to ./example_outputs/')

    else:
        pass
        '''
        Since the model needs about 2 s per prediction, the camera demo is not supported yet; it may be added later.
        '''
        # # Camera input
        # cap = cv.VideoCapture(0)

        # while cv.waitKey(1) < 0:
        #     ret, frame = cap.read()
        #     if not ret:
        #         break

        #     # Preprocess and run the model on the frame
        #     blob = cv.dnn.blobFromImage(frame, size=(224, 224), mean=(123.675, 116.28, 103.53), swapRB=True, crop=False)
        #     model.setInput(blob)
        #     result = model.forward()

        #     # Visualize the results
        #     vis_frame = visualize(frame, result)
        #     cv.imshow('EfficientSAM Demo', vis_frame)

        # # Release the camera
        # cap.release()
```
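For scripted, non-interactive use (e.g. quick checks without the GUI windows), the same `EfficientSam.infer` path from demo.py can be driven with a hard-coded click point. A minimal sketch, where the model path, image path, and the (x, y) point are assumptions picked purely for illustration:

```python
import cv2 as cv
from efficientSAM import EfficientSam

# Model file added in this PR; path assumed to be the current working directory.
model = EfficientSam(modelPath='image_segmentation_efficientsam_ti_2024may.onnx')

image = cv.imread('examples/examples_image1.jpg')  # any test image works
assert image is not None, 'could not read the input image'

# One positive click at pixel (200, 150); label 1 marks a foreground point,
# matching the interactive demo's call model.infer(image=..., points=point, lables=[1]).
mask = model.infer(image=image, points=[[200, 150]], lables=[1])

# The returned mask is a uint8 image (0 or 255) at the original resolution.
cv.imwrite('mask.jpg', mask)
```

Running demo.py with `--save` performs the same `imwrite` step for both the raw mask and the overlay produced by `visualize`.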
models/image_segmentation_efficientsam/efficientSAM.py (new file):

```python
import numpy as np
import cv2 as cv


class EfficientSam:
    def __init__(self, modelPath, backendId=0, targetId=0):
        self._modelPath = modelPath
        self._backendId = backendId
        self._targetId = targetId

        self._model = cv.dnn.readNet(self._modelPath)
        self._model.setPreferableBackend(self._backendId)
        self._model.setPreferableTarget(self._targetId)
        # the model takes 3 named inputs
        self._inputNames = ["batched_images", "batched_point_coords", "batched_point_labels"]

        self._outputNames = ['output_masks']  # actual output layer name
        self._currentInputSize = None
        self._inputSize = [640, 640]  # input size for the model

    @property
    def name(self):
        return self.__class__.__name__

    def setBackendAndTarget(self, backendId, targetId):
        self._backendId = backendId
        self._targetId = targetId
        self._model.setPreferableBackend(self._backendId)
        self._model.setPreferableTarget(self._targetId)

    def _preprocess(self, image, points, lables):
        image = cv.cvtColor(image, cv.COLOR_BGR2RGB)
        # record the original image size, (width, height)
        self._currentInputSize = (image.shape[1], image.shape[0])

        image = cv.resize(image, self._inputSize)

        image = image.astype(np.float32, copy=False) / 255.0

        # scale the points into the 640x640 model input space
        for p in points:
            p[0] = int(p[0] * self._inputSize[0] / self._currentInputSize[0])
            p[1] = int(p[1] * self._inputSize[1] / self._currentInputSize[1])

        image_blob = cv.dnn.blobFromImage(image)

        points_blob = np.array([[points]], dtype=np.float32)

        lables_blob = np.array([[[lables]]])

        return image_blob, points_blob, lables_blob

    def infer(self, image, points, lables):
        # Preprocess
        imageBlob, pointsBlob, lablesBlob = self._preprocess(image, points, lables)
        # Forward
        self._model.setInput(imageBlob, self._inputNames[0])
        self._model.setInput(pointsBlob, self._inputNames[1])
        self._model.setInput(lablesBlob, self._inputNames[2])
        outputBlob = self._model.forward()
        # Postprocess
        results = self._postprocess(outputBlob)

        return results

    def _postprocess(self, outputBlob):
        # positive logits belong to the predicted mask
        mask = outputBlob[0, 0, 0, :, :] >= 0

        mask_uint8 = (mask * 255).astype(np.uint8)
        # resize the mask back to the original image size
        mask_uint8 = cv.resize(mask_uint8, dsize=(self._currentInputSize[0], self._currentInputSize[1]), interpolation=cv.INTER_CUBIC)

        return mask_uint8
```
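For reference, the tensor layouts that `_preprocess` feeds into the three named inputs can be inspected directly. A small sketch; the shapes follow only from the `np.array([[points]])` and `np.array([[[lables]]])` constructions above, and the example point values are arbitrary:

```python
import numpy as np

# Two positive clicks, already expressed in the 640x640 model input space.
points = [[320, 240], [100, 400]]
lables = [1, 1]

points_blob = np.array([[points]], dtype=np.float32)  # fed to "batched_point_coords"
lables_blob = np.array([[[lables]]])                   # fed to "batched_point_labels"

print(points_blob.shape)  # (1, 1, 2, 2)
print(lables_blob.shape)  # (1, 1, 1, 2)

# _postprocess reads the mask from outputBlob[0, 0, 0, :, :], i.e. the first
# mask for the first point set of the first image, and thresholds the logits at 0.
```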
Binary files added (each entry is 3 additions and 0 deletions; previews are not available in the diff view):

- models/image_segmentation_efficientsam/example_outputs/example1.png
- models/image_segmentation_efficientsam/example_outputs/example2.png
- models/image_segmentation_efficientsam/examples/examples_image1.jpg
- models/image_segmentation_efficientsam/examples/examples_image10.jpg
- models/image_segmentation_efficientsam/examples/examples_image11.jpg
- models/image_segmentation_efficientsam/examples/examples_image12.jpg
- models/image_segmentation_efficientsam/examples/examples_image13.jpg
- models/image_segmentation_efficientsam/examples/examples_image14.jpg
- models/image_segmentation_efficientsam/examples/examples_image2.jpg
- models/image_segmentation_efficientsam/examples/examples_image3.jpg
- models/image_segmentation_efficientsam/examples/examples_image4.jpg
- models/image_segmentation_efficientsam/examples/examples_image5.jpg
- models/image_segmentation_efficientsam/examples/examples_image6.jpg
- models/image_segmentation_efficientsam/examples/examples_image7.jpg
- models/image_segmentation_efficientsam/examples/examples_image8.jpg
- models/image_segmentation_efficientsam/examples/examples_image9.jpg
- models/image_segmentation_efficientsam/image_segmentation_efficientsam_ti_2024may.onnx (Git LFS file not shown)