21 changes: 21 additions & 0 deletions models/image_segmentation_efficientsam/LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Zhang-Yang-Sustech

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
37 changes: 37 additions & 0 deletions models/image_segmentation_efficientsam/README.md
@@ -0,0 +1,37 @@
# image_segmentation_efficientsam

EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

Notes:
-
Review comment (Member):
The original repo offers EfficientSAM-S and -Ti. Which one is used here? Add a note describing it, including version info (shasum, md5sum, etc.). Also describe how you convert the model to ONNX (a script would be nice).

Also need to describe how many clicks are required.


## Demo

### Python
Run the following command to try the demo:

```shell
python demo.py --input /path/to/image
```
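`demo.py` also accepts a model path and a save flag (defined in its argparse setup), for example:

```shell
# run with an explicit model file; --save writes vis_result.jpg and mask.jpg to ./example_outputs/
python demo.py --input /path/to/image --model image_segmentation_efficientsam_ti_2024may.onnx --save
```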

### C++

## Result

Here are some sample results observed with the model:

![example1.png](./example_outputs/example1.png)
![example2.png](./example_outputs/example2.png)

## Model metrics

## License

All files in this directory are licensed under the [MIT License](./LICENSE).

#### Contributor Details

## Reference

- https://arxiv.org/abs/2312.00863
- https://github.com/yformer/EfficientSAM
137 changes: 137 additions & 0 deletions models/image_segmentation_efficientsam/demo.py
@@ -0,0 +1,137 @@
import argparse
import numpy as np
import cv2 as cv
from efficientSAM import EfficientSam

# Check OpenCV version
assert cv.__version__ >= "4.9.0", \
    "Please install latest opencv-python to try this demo: python3 -m pip install --upgrade opencv-python"

parser = argparse.ArgumentParser(description='EfficientSAM Demo')
parser.add_argument('--input', '-i', type=str,
                    help='Set path to an input image; omit to use the camera.')
parser.add_argument('--model', '-m', type=str, default='image_segmentation_efficientsam_ti_2024may.onnx',
                    help='Set model path, defaults to image_segmentation_efficientsam_ti_2024may.onnx.')
parser.add_argument('--save', '-s', action='store_true',
                    help='Specify to save the results to files. Ignored in case of camera input.')
args = parser.parse_args()

# global flag set by the mouse callback when the left button is released
clicked_left = False
# global list of clicked points in the window
point = []

def visualize(image, result):
    """
    Visualize the inference result on the input image.

    Args:
        image (np.ndarray): The input image.
        result (np.ndarray): The inference result.

    Returns:
        vis_result (np.ndarray): The visualized result.
    """
    # copy the image and the mask
    vis_result = np.copy(image)
    mask = np.copy(result)
    # binarize the mask
    _, binary = cv.threshold(mask, 127, 255, cv.THRESH_BINARY)
    assert set(np.unique(binary)) <= {0, 255}, "The mask must be a binary image"
    # enhance the red channel to make the segmented region more obvious
    enhancement_factor = 1.8
    red_channel = vis_result[:, :, 2].astype(np.float32)
    # scale the red channel inside the mask, clamped to 255
    red_channel = np.where(binary == 255, np.minimum(red_channel * enhancement_factor, 255), red_channel)
    vis_result[:, :, 2] = red_channel.astype(np.uint8)

    # draw white borders around the segmented region
    contours, _ = cv.findContours(binary, cv.RETR_LIST, cv.CHAIN_APPROX_TC89_L1)
    cv.drawContours(vis_result, contours, contourIdx=-1, color=(255, 255, 255), thickness=2)
    return vis_result

def select(event, x, y, flags, param):
    global clicked_left
    # when the left mouse button is released, record the clicked coordinates
    if event == cv.EVENT_LBUTTONUP:
        point.append([x, y])
        print("point:", point[0])
        clicked_left = True

if __name__ == '__main__':
    # Load the EfficientSAM model
    model = EfficientSam(modelPath=args.model)

    if args.input is not None:
        # Read image
        image = cv.imread(args.input)
        if image is None:
            print('Could not open or find the image:', args.input)
            exit(0)
        # create a window; the title doubles as a usage hint
        image_window = "image: click on the thing which you want to segment!"
        cv.namedWindow(image_window, cv.WINDOW_NORMAL)
        # cap the window size at 800x600 (resizeWindow takes width, then height)
        cv.resizeWindow(image_window, min(image.shape[1], 800), min(image.shape[0], 600))
        # put the window on the left of the screen
        cv.moveWindow(image_window, 50, 100)
        # set a mouse listener to record the user's click point
        cv.setMouseCallback(image_window, select)
        # tip in the terminal
        print("click the picture on the LEFT and see the result on the RIGHT!")
        # show image
        cv.imshow(image_window, image)
        # wait for clicks
        while cv.waitKey(1) == -1 or clicked_left:
            if clicked_left:
                # feed the clicked point (x, y) to the model
                result = model.infer(image=image, points=point, lables=[1])
                # get the visualized result
                vis_result = visualize(image, result)
                # show the visualized result in a second window on the right
                cv.namedWindow("vis_result", cv.WINDOW_NORMAL)
                cv.resizeWindow("vis_result", min(vis_result.shape[1], 800), min(vis_result.shape[0], 600))
                cv.moveWindow("vis_result", 851, 100)
                cv.imshow("vis_result", vis_result)
                # reset the flag to listen for the next click
                clicked_left = False
            elif cv.getWindowProperty(image_window, cv.WND_PROP_VISIBLE) < 1:
                # exit when the image window is closed
                break
            else:
                # no click yet: clear the recorded points
                point = []
        cv.destroyAllWindows()

        # Save results if requested
        if args.save:
            cv.imwrite('./example_outputs/vis_result.jpg', vis_result)
            cv.imwrite("./example_outputs/mask.jpg", result)
            print('vis_result.jpg and mask.jpg are saved to ./example_outputs/')

    else:
        # Camera input is not supported yet: the model needs about 2 s per
        # inference, which is too slow for a live preview.
        pass
# # Camera input
# cap = cv.VideoCapture(0)

# while cv.waitKey(1) < 0:
# ret, frame = cap.read()
# if not ret:
# break

# # Preprocess and run the model on the frame
# blob = cv.dnn.blobFromImage(frame, size=(224, 224), mean=(123.675, 116.28, 103.53), swapRB=True, crop=False)
# model.setInput(blob)
# result = model.forward()

# # Visualize the results
# vis_frame = visualize(frame, result)
# cv.imshow('EfficientSAM Demo', vis_frame)

# # Release the camera
# cap.release()
73 changes: 73 additions & 0 deletions models/image_segmentation_efficientsam/efficientSAM.py
@@ -0,0 +1,73 @@
import numpy as np
import cv2 as cv

class EfficientSam:
    def __init__(self, modelPath, backendId=0, targetId=0):
        self._modelPath = modelPath
        self._backendId = backendId
        self._targetId = targetId

        self._model = cv.dnn.readNet(self._modelPath)
        self._model.setPreferableBackend(self._backendId)
        self._model.setPreferableTarget(self._targetId)
        # the model takes 3 inputs
        self._inputNames = ["batched_images", "batched_point_coords", "batched_point_labels"]

        self._outputNames = ['output_masks']  # actual output layer name
        self._currentInputSize = None
        self._inputSize = [640, 640]  # input size for the model

    @property
    def name(self):
        return self.__class__.__name__

    def setBackendAndTarget(self, backendId, targetId):
        self._backendId = backendId
        self._targetId = targetId
        self._model.setPreferableBackend(self._backendId)
        self._model.setPreferableTarget(self._targetId)

    def _preprocess(self, image, points, lables):
        image = cv.cvtColor(image, cv.COLOR_BGR2RGB)
        # record the input image size as (width, height)
        self._currentInputSize = (image.shape[1], image.shape[0])

        image = cv.resize(image, self._inputSize)
        image = image.astype(np.float32, copy=False) / 255.0

        # map the points into the model's 640x640 input space
        for p in points:
            p[0] = int(p[0] * self._inputSize[0] / self._currentInputSize[0])
            p[1] = int(p[1] * self._inputSize[1] / self._currentInputSize[1])

        image_blob = cv.dnn.blobFromImage(image)
        points_blob = np.array([[points]], dtype=np.float32)
        lables_blob = np.array([[[lables]]])

        return image_blob, points_blob, lables_blob

    def infer(self, image, points, lables):
        # Preprocess
        imageBlob, pointsBlob, lablesBlob = self._preprocess(image, points, lables)
        # Forward
        self._model.setInput(imageBlob, self._inputNames[0])
        self._model.setInput(pointsBlob, self._inputNames[1])
        self._model.setInput(lablesBlob, self._inputNames[2])
        outputBlob = self._model.forward()
        # Postprocess
        results = self._postprocess(outputBlob)

        return results

    def _postprocess(self, outputBlob):
        # positive logits form the mask
        mask = outputBlob[0, 0, 0, :, :] >= 0
        mask_uint8 = (mask * 255).astype(np.uint8)
        # resize the mask back to the original image size
        mask_uint8 = cv.resize(mask_uint8, dsize=(self._currentInputSize[0], self._currentInputSize[1]), interpolation=cv.INTER_CUBIC)

        return mask_uint8
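The coordinate rescaling in `_preprocess` maps click points from the original image space into the model's 640x640 input space. A standalone sketch of that step (the helper name is an assumption for illustration, not part of the PR):

```python
def rescale_points(points, original_size, input_size=(640, 640)):
    """Map (x, y) clicks from original image space to model input space,
    mirroring the loop in EfficientSam._preprocess."""
    ow, oh = original_size  # original (width, height)
    iw, ih = input_size     # model input (width, height)
    return [[int(x * iw / ow), int(y * ih / oh)] for x, y in points]

# A click at the centre of a 1280x960 image lands at the centre of 640x640.
print(rescale_points([[640, 480]], (1280, 960)))  # [[320, 320]]
```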