Multilabel DeepEdit

Multilabel DeepEdit generalizes the DeepEdit App: it addresses both single-label and multilabel segmentation tasks. Like DeepEdit, this App combines two models in a single architecture: automatic inference, as in a standard segmentation method (e.g. UNet), and interactive segmentation driven by user clicks.

Training schema:

The training process of a DeepEdit App combines simulated clicks with standard training. As shown in the next figure, the input of the network is a concatenation of three tensors: the image, the positive (foreground) points or clicks, and the negative (background) points or clicks. Training alternates between two modes: in some iterations the foreground and background tensors are all zeros, so the model learns to segment automatically, while in other iterations positive and negative clicks are simulated so the model learns to use interactive input. For the click simulation, users can take advantage of the transforms and engines already available in MONAI.

DeepEdit Schema for Training
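
The following is a minimal sketch of how this three-channel input can be assembled during training. The tensor shapes, the UNet configuration, and the naive click placement are illustrative assumptions; MONAI Label provides dedicated transforms and engines for the actual click simulation.

```python
# Minimal sketch: building the 3-channel DeepEdit training input
# (image + foreground-click channel + background-click channel).
# Shapes, click coordinates, and the UNet configuration are assumptions.
import torch
from monai.networks.nets import UNet

def make_guidance(shape, clicks):
    """Encode a list of (z, y, x) clicks as a binary guidance channel."""
    guidance = torch.zeros(shape)
    for z, y, x in clicks:
        guidance[z, y, x] = 1.0
    return guidance

image = torch.rand(1, 64, 64, 64)  # single-channel image patch (assumed size)

# Automatic-segmentation iterations: guidance channels stay at zero.
fg = torch.zeros(64, 64, 64)
bg = torch.zeros(64, 64, 64)

# Interactive iterations: simulate a few positive/negative clicks instead.
simulate_clicks = True
if simulate_clicks:
    fg = make_guidance((64, 64, 64), [(32, 30, 28)])  # hypothetical positive click
    bg = make_guidance((64, 64, 64), [(10, 50, 12)])  # hypothetical negative click

# Network input: concatenation of image, positive and negative guidance channels.
x = torch.cat([image, fg.unsqueeze(0), bg.unsqueeze(0)], dim=0).unsqueeze(0)

net = UNet(spatial_dims=3, in_channels=3, out_channels=2,
           channels=(16, 32, 64, 128), strides=(2, 2, 2), num_res_units=2)
logits = net(x)  # shape: (1, 2, 64, 64, 64)
```

For a multilabel setup, out_channels would be the number of labels plus background rather than 2.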

Inference schema:

As mentioned before, DeepEdit combines a standard and an interactive segmentation algorithm in one single model. The following schema shows the two types of inputs this model can process: standard segmentation, where the foreground and background point tensors are all zeros, and interactive segmentation, where the foreground and background points are encoded in those tensors.

DeepEdit Schema for Testing
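
Below is a minimal sketch of the two inference modes under the same assumptions as the training sketch (a three-channel network with illustrative shapes and click coordinates):

```python
# Minimal sketch: running the same network in automatic and interactive mode.
# In practice the network weights would be loaded from a trained checkpoint.
import torch
from monai.networks.nets import UNet

net = UNet(spatial_dims=3, in_channels=3, out_channels=2,
           channels=(16, 32, 64, 128), strides=(2, 2, 2), num_res_units=2)
net.eval()

image = torch.rand(1, 1, 64, 64, 64)
zeros = torch.zeros(1, 1, 64, 64, 64)

with torch.no_grad():
    # Automatic mode: guidance channels are all zeros, the model acts as a standard UNet.
    auto_pred = net(torch.cat([image, zeros, zeros], dim=1)).argmax(dim=1)

    # Interactive mode: user clicks are encoded into the guidance channels
    # to refine the prediction.
    fg, bg = zeros.clone(), zeros.clone()
    fg[0, 0, 32, 30, 28] = 1.0  # hypothetical foreground click
    bg[0, 0, 10, 50, 12] = 1.0  # hypothetical background click
    refined_pred = net(torch.cat([image, fg, bg], dim=1)).argmax(dim=1)
```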