Commit e214460

Watershed algorithm (#222)
1 parent 5c509cc commit e214460

File tree

6 files changed: +91 −2 lines

Project.toml

Lines changed: 1 addition & 1 deletion

```diff
@@ -39,7 +39,7 @@ AxisArrays = "0"
 ColorTypes = "0"
 Colors = "0"
 CoordinateTransformations = "0"
-DemoCards = "0.4"
+DemoCards = "0.4.4"
 Distances = "0"
 Documenter = "0.27"
 FileIO = "1"
```

docs/examples/color_channels/indexed_image.jl

Lines changed: 2 additions & 1 deletion

```diff
@@ -2,6 +2,7 @@
 # cover: assets/indexed_image.png
 # title: Indexed image in 5 minutes
 # author: Johnny Chen
+# id: demo_indexed_image
 # date: 2020-11-23
 # ---
```

```diff
@@ -67,7 +68,7 @@ indexed_img = IndirectArray(indices, palatte)
 img == indexed_img

 # Under the hook, it is just a simple struct that subtypes `AbstractArray`:
-#
+#
 # ```julia
 # # no need to run this
 # struct IndirectArray{T,N,A,V} <: AbstractArray{T,N}
```
3 binary image assets added (178 KB, 51.8 KB, 51.3 KB)
Lines changed: 88 additions & 0 deletions (new file)

```julia
# ---
# cover: assets/watershed.gif
# title: Watershed Segmentation Algorithm
# description: This demo shows how to use the watershed algorithm to segment an image.
# author: Ashwani Rathee
# date: 2021-08-16
# ---
```

In this demonstration, we will segment an image using the watershed algorithm and learn how it segments images. We will be using ImageSegmentation.jl, which provides implementations of several image segmentation algorithms.

```julia
using Images
using ImageSegmentation, TestImages
using IndirectArrays

img = testimage("blobs")
img_example = zeros(Gray, 5, 5)
img_example[2:4, 2:4] .= Gray(0.6)
bw = Gray.(img) .> 0.5
bw_example = img_example .> 0.5
```
```julia
bw_transform = feature_transform(bw)
bw_transform_example = feature_transform(bw_example)
```

`feature_transform` computes the feature transform of a binary image `bw`: for each location in `bw`, it finds the closest "feature" (a position where `bw` is `true`). Specifically, `F[i]` is a `CartesianIndex` encoding the position closest to `i` for which `bw[F[i]]` is `true`. In cases where two or more features in `bw` have the same distance from `i`, an arbitrary feature is chosen. If `bw` has no `true` values, then all locations are mapped to an index where each coordinate is `typemin(Int)`.

For example, the closest `true` to `bw_example[1,1]` is at `CartesianIndex(2, 2)`, so it is assigned `CartesianIndex(2, 2)`; the other positions in `bw_example` are processed similarly.
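To build intuition, here is a minimal brute-force sketch of what a feature transform computes (`naive_feature_transform` is a hypothetical helper for illustration, not ImageSegmentation.jl's implementation): for every location, scan all `true` positions and keep the nearest one.

```julia
# Hypothetical brute-force feature transform, for illustration only;
# ImageSegmentation.jl uses a much faster algorithm.
function naive_feature_transform(bw::AbstractMatrix{Bool})
    feats = findall(bw)  # all positions where bw is true
    F = Matrix{CartesianIndex{2}}(undef, size(bw))
    for i in CartesianIndices(bw)
        if isempty(feats)
            F[i] = CartesianIndex(typemin(Int), typemin(Int))
        else
            # nearest feature by squared Euclidean distance
            F[i] = argmin(f -> sum(abs2, Tuple(i - f)), feats)
        end
    end
    return F
end

bw_example = falses(5, 5)
bw_example[2:4, 2:4] .= true
F = naive_feature_transform(bw_example)
F[1, 1]  # CartesianIndex(2, 2): the closest `true` position to the corner
```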
```julia
dist = 1 .- distance_transform(bw_transform)
dist_example = 1 .- distance_transform(bw_transform_example)
```

| Dist (distance transform for `img`) | Dist (distance transform for `img_example`) |
| :---: | :---: |
| ![](assets/contour1.png) | ![](assets/dist_example.png) |
`distance_transform` takes a feature transform `F` (here `bw_transform`), in which each element `F[i]` represents a "target" or "feature" location assigned to `i`, and returns the array `D` where `D[i]` is the distance between `i` and `F[i]`. Optionally specify the weight `w` assigned to each coordinate; the default value of `nothing` is equivalent to `w=(1,1,...)`.

In `bw_transform`, the element at `[1,1]` holds `CartesianIndex(2, 2)`, so `D[i]` for this position is the distance between `CartesianIndex(1, 1)` and `CartesianIndex(2, 2)`, which is `sqrt(2)`.
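The relationship between `F` and `D` can be sketched directly (`naive_distance_transform` is a hypothetical helper for illustration, not the package's `distance_transform`):

```julia
# Hypothetical illustration: derive D from a feature transform F,
# where D[i] is the Euclidean distance between i and F[i].
naive_distance_transform(F) =
    [sqrt(sum(abs2, Tuple(i - F[i]))) for i in CartesianIndices(F)]

# Toy feature transform on a 3x3 grid: every pixel maps to (2, 2).
F = fill(CartesianIndex(2, 2), 3, 3)
D = naive_distance_transform(F)
D[1, 1]  # sqrt(2), just like the [1, 1] entry discussed above
```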
```julia
dist_trans = dist .< 1
markers = label_components(dist_trans)
markers_example = label_components(dist_example .< 0.5)
Gray.(markers / 32.0) # each blob is marked slightly differently by label_components, from 1 to 64
```

`label_components` finds the connected components in a binary array `dist_trans`. You can provide a list indicating which dimensions are used to determine connectivity. For example, `region = [1,3]` would not test neighbors along dimension 2 for connectivity. This corresponds to just the nearest neighbors, i.e., 4-connectivity in 2d and 6-connectivity in 3d. The default is `region = 1:ndims(A)`. The output `label` is an integer array, where 0 is used for background pixels and each connected region gets a different integer index.
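A minimal flood-fill sketch of connected-component labeling with 4-connectivity (`naive_label_components` is a hypothetical stand-in for illustration; `label_components` itself is considerably more efficient):

```julia
# Hypothetical 4-connectivity labeling by flood fill, for illustration.
function naive_label_components(A::AbstractMatrix{Bool})
    labels = zeros(Int, size(A))
    current = 0
    for seed in CartesianIndices(A)
        (A[seed] && labels[seed] == 0) || continue
        current += 1                       # start a new component
        stack = [seed]
        while !isempty(stack)
            i = pop!(stack)
            labels[i] == 0 || continue
            labels[i] = current
            for d in (CartesianIndex(1, 0), CartesianIndex(-1, 0),
                      CartesianIndex(0, 1), CartesianIndex(0, -1))
                n = i + d
                if checkbounds(Bool, A, n) && A[n] && labels[n] == 0
                    push!(stack, n)        # grow into 4-connected neighbors
                end
            end
        end
    end
    return labels
end

A = Bool[1 0 1; 1 0 0; 0 0 1]
naive_label_components(A)  # three components; 0 marks background
```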
```julia
segments = watershed(dist, markers)
segments_example = watershed(dist_example, markers_example)
```

The `watershed` method segments the image using the watershed transform. Each basin formed by the watershed transform corresponds to a segment. To get the segments, we provide `dist` and `markers`, with each region's marker assigned an index starting from 1; zero means not a marker. If two markers have the same index, their regions will be merged into a single region.
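The flooding idea can be sketched on a 1-D signal (`toy_watershed` is a hypothetical, highly simplified illustration, not ImageSegmentation.jl's algorithm): pixels are visited from low height to high and adopt the basin label of an already-flooded neighbor.

```julia
# Toy 1-D marker-based flooding, for illustration only.
function toy_watershed(height::Vector{<:Real}, markers::Vector{Int})
    labels = copy(markers)
    order = sortperm(height)             # flood the lowest pixels first
    changed = true
    while changed
        changed = false
        for i in order
            labels[i] == 0 || continue
            for n in (i - 1, i + 1)
                if 1 <= n <= length(labels) && labels[n] != 0
                    labels[i] = labels[n]  # join the neighboring basin
                    changed = true
                    break
                end
            end
        end
    end
    return labels
end

height  = [0.1, 0.3, 0.9, 0.2, 0.1]  # ridge at index 3 separates two basins
markers = [1, 0, 0, 0, 2]
toy_watershed(height, markers)  # [1, 1, 1, 2, 2]
```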
```julia
labels = labels_map(segments)
colored_labels = IndirectArray(labels, distinguishable_colors(maximum(labels)))
masked_colored_labels = colored_labels .* (1 .- bw)
mosaic(img, colored_labels, masked_colored_labels; nrow=1)
```

Here we use `IndirectArray` to store the indexed image; for more explanation of it, please check the tutorial [Indexed image in 5 minutes](@ref demo_indexed_image).

```julia
save("assets/watershed.gif", cat(img, colored_labels, masked_colored_labels; dims=3); fps=1) #src
```
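The idea behind `IndirectArray` can be sketched with a stripped-down struct (`ToyIndirectArray` is a hypothetical simplification; the real IndirectArrays.jl type has more type parameters and features): an index array plus a palette of values, exposed as an `AbstractArray`.

```julia
# Hypothetical minimal indexed-image type: an index array plus a palette.
struct ToyIndirectArray{T,N} <: AbstractArray{T,N}
    index::Array{Int,N}   # which palette entry each pixel uses
    values::Vector{T}     # the palette itself
end
Base.size(A::ToyIndirectArray) = size(A.index)
Base.getindex(A::ToyIndirectArray, i::Int...) = A.values[A.index[i...]]

palette = ["red", "green"]
toy = ToyIndirectArray{String,2}([1 2; 2 1], palette)
toy[1, 2]  # "green": pixel (1, 2) stores index 2 into the palette
```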

0 commit comments
