Multilabel DeepEdit
Multilabel DeepEdit generalizes the DeepEdit App to address both single-label and multi-label segmentation tasks. Like DeepEdit, this App combines the power of two models in a single architecture: automatic inference, as in a standard segmentation method (e.g. UNet), and interactive segmentation driven by user clicks.
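As a rough illustration of this single-architecture idea, the sketch below builds a UNet-style network whose input is the image concatenated with two click channels. The choice of `DynUNet`, the 3-channel input (one image channel plus positive and negative guidance), and all hyperparameters are assumptions made for illustration, not the App's exact configuration.

```python
import torch
from monai.networks.nets import DynUNet

# Assumed layout: single-channel image + positive-click channel
# + negative-click channel, concatenated along the channel axis.
network = DynUNet(
    spatial_dims=3,
    in_channels=3,          # image + foreground clicks + background clicks
    out_channels=2,         # background / foreground (more for multilabel)
    kernel_size=[3, 3, 3, 3, 3, 3],
    strides=[1, 2, 2, 2, 2, [2, 2, 1]],
    upsample_kernel_size=[2, 2, 2, 2, [2, 2, 1]],
    norm_name="instance",
    res_block=True,
)

# One forward pass on a dummy 3-channel input volume.
x = torch.rand(1, 3, 128, 128, 64)
logits = network(x)  # shape: (1, 2, 128, 128, 64)
```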
The training process of a DeepEdit App combines standard training with simulated clicks. As shown in the next figure, the input of the network is a concatenation of three tensors: the image, the positive (foreground) points or clicks, and the negative (background) points or clicks. The model is trained in two modes: for some iterations, the tensors representing the foreground and background points are set to zero, while for other iterations positive and negative clicks are simulated so the model learns to handle interactive input (see the sketch below). For the click simulation, users can take advantage of the transforms and engines already available in MONAI.
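Below is a minimal, simplified sketch of the two training modes. The function name `make_guidance` is hypothetical and the click simulation (sampling a single positive voxel from the label while leaving the negative channel empty) is deliberately reduced; in practice the MONAI DeepEdit transforms and engines handle this.

```python
import torch

def make_guidance(image: torch.Tensor, label: torch.Tensor, p_clicks: float = 0.5):
    """Build the 3-channel network input for one training sample.

    image, label: tensors of shape (1, H, W, D).
    With probability 1 - p_clicks the guidance channels stay all-zero
    (automatic-segmentation mode); otherwise one positive click is
    sampled from the label (simplified interactive mode).
    """
    pos = torch.zeros_like(image)
    neg = torch.zeros_like(image)
    if torch.rand(1).item() < p_clicks:
        fg_voxels = torch.nonzero(label[0])               # candidate click positions
        if len(fg_voxels) > 0:
            idx = fg_voxels[torch.randint(len(fg_voxels), (1,))][0]
            pos[0, idx[0], idx[1], idx[2]] = 1.0          # one simulated positive click
    return torch.cat([image, pos, neg], dim=0)            # shape (3, H, W, D)
```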

As mentioned before, DeepEdit combines the power of a standard and an interactive segmentation algorithm in a single model. The following schema shows the two types of input the model can process: standard segmentation, where the foreground and background point tensors are zero, and interactive segmentation, where those tensors encode the user-provided clicks.
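The sketch below illustrates these two input modes at inference time, reusing the hypothetical 3-channel layout and `network` from the earlier sketch: zero guidance channels for the automatic pass, and guidance tensors built from user clicks for the interactive pass.

```python
import torch

# `network` is the 3-channel model sketched earlier; `image` is (1, 1, H, W, D).
image = torch.rand(1, 1, 128, 128, 64)
zeros = torch.zeros_like(image)

# 1) Automatic segmentation: both guidance channels are zero.
auto_input = torch.cat([image, zeros, zeros], dim=1)
# auto_pred = network(auto_input).argmax(dim=1)

# 2) Interactive segmentation: encode user clicks in the guidance channels.
pos = torch.zeros_like(image)
neg = torch.zeros_like(image)
pos[0, 0, 64, 64, 32] = 1.0   # example foreground click (voxel coordinates)
neg[0, 0, 10, 10, 5] = 1.0    # example background click
interactive_input = torch.cat([image, pos, neg], dim=1)
# interactive_pred = network(interactive_input).argmax(dim=1)
```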
