
Commit 8bc8e71

add documentation folder
1 parent b2c5ff0 commit 8bc8e71

4 files changed: +183 -0 lines changed


docs/source/api_reference.rst

Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
API Reference
=============

.. toctree::
   :caption: API Reference

.. autofunction:: segmenteverygrain.predict_image
.. autofunction:: segmenteverygrain.predict_large_image
.. autofunction:: segmenteverygrain.predict_image_tile
.. autofunction:: segmenteverygrain.label_grains
.. autofunction:: segmenteverygrain.one_point_prompt
.. autofunction:: segmenteverygrain.two_point_prompt
.. autofunction:: segmenteverygrain.find_overlapping_polygons
.. autofunction:: segmenteverygrain.weighted_crossentropy
.. autofunction:: segmenteverygrain.plot_images_and_labels
.. autofunction:: segmenteverygrain.calculate_iou
.. autofunction:: segmenteverygrain.pick_most_similar_polygon
.. autofunction:: segmenteverygrain.sam_segmentation
.. autofunction:: segmenteverygrain.find_connected_components
.. autofunction:: segmenteverygrain.merge_overlapping_polygons
.. autofunction:: segmenteverygrain.rasterize_grains
.. autofunction:: segmenteverygrain.create_labeled_image
.. autofunction:: segmenteverygrain.load_and_preprocess
.. autofunction:: segmenteverygrain.onclick
.. autofunction:: segmenteverygrain.onpress
.. autofunction:: segmenteverygrain.onclick2
.. autofunction:: segmenteverygrain.onpress2
.. autofunction:: segmenteverygrain.click_for_scale
.. autofunction:: segmenteverygrain.get_grains_from_patches
.. autofunction:: segmenteverygrain.plot_image_w_colorful_grains
.. autofunction:: segmenteverygrain.plot_grain_axes_and_centroids
.. autofunction:: segmenteverygrain.classify_points
.. autofunction:: segmenteverygrain.compute_curvature

docs/source/conf.py

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

project = 'segmenteverygrain'
copyright = '2024, Zoltan Sylvester'
author = 'Zoltan Sylvester'
release = '0.1.8'

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration

extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.napoleon',
]

templates_path = ['_templates']
exclude_patterns = []


# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output

html_theme = 'alabaster'
html_static_path = ['_static']
import os
import sys
# autodoc needs the directory that *contains* the package on sys.path,
# not the package's __init__.py file
sys.path.insert(0, os.path.abspath('/Users/zoltan/Dropbox/Segmentation/segmenteverygrain'))

docs/source/getting_started.rst

Lines changed: 65 additions & 0 deletions
@@ -0,0 +1,65 @@
Getting started
---------------

.. toctree::
   :caption: Getting started

To load the Unet model:

.. code-block:: python

   from tensorflow.keras.optimizers import Adam

   import segmenteverygrain as seg

   model = seg.Unet()
   model.compile(optimizer=Adam(), loss=seg.weighted_crossentropy, metrics=["accuracy"])
   model.load_weights('./checkpoints/seg_model')
To run the Unet segmentation on an image and label the grains in the Unet output:

.. code-block:: python

   image_pred = seg.predict_image(image, model, I=256)
   labels, coords = seg.label_grains(image, image_pred, dbs_max_dist=20.0)
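The prompt-generation step can be pictured with a small, hypothetical sketch: label the connected components of a binary grain mask and use their centroids as point prompts. This is only an illustration (``label_grains`` applies its own clustering logic to the Unet output); the mask below is made up for the example.

```python
import numpy as np
from scipy import ndimage

# a toy binary "grain mask", standing in for a thresholded Unet prediction
mask = np.zeros((20, 20), dtype=np.uint8)
mask[2:6, 2:6] = 1      # first grain
mask[10:15, 12:18] = 1  # second grain

# label connected components, then take the centroid of each as a prompt point
labeled, n = ndimage.label(mask)
coords = ndimage.center_of_mass(mask, labeled, range(1, n + 1))
```

Each centroid in ``coords`` is a (row, col) pair that could serve as a one-point prompt for SAM.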
The input image is expected to be a numpy array with three channels (RGB), and it should not be much larger than ~2000x3000 pixels, in part to avoid long run times.
Grains should be well defined in the image and not too small (grains that are only a few pixels across are likely to be missed).
The Unet prediction should be QC-d before running the SAM segmentation:
.. code-block:: python

   import matplotlib.pyplot as plt
   import numpy as np

   plt.figure(figsize=(15, 10))
   plt.imshow(image_pred)
   plt.scatter(np.array(coords)[:, 0], np.array(coords)[:, 1], c='k')
   plt.xticks([])
   plt.yticks([])

If the Unet segmentation is of low quality, the base model can (and should) be fine-tuned using the ``Train_seg_unet_model.ipynb`` notebook.
To run the SAM segmentation on an image, using the outputs from the Unet model:

.. code-block:: python

   all_grains, labels, mask_all, grain_data, fig, ax = seg.sam_segmentation(
       sam, image, image_pred, coords, labels, min_area=400.0,
       plot_image=True, remove_edge_grains=False, remove_large_objects=False)

The ``all_grains`` list contains shapely polygons of the grains detected in the image. ``labels`` is an image that contains the labels of the grains.
``grain_data`` is a pandas dataframe with a number of grain parameters.
If you want to detect grains in large images, you should use the ``predict_large_image`` function, which will split the image into patches and run the Unet and SAM segmentations on each patch:

.. code-block:: python

   all_grains = seg.predict_large_image(fname, model, sam, min_area=400.0, patch_size=2000, overlap=200)
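The patch-based idea behind ``predict_large_image`` can be sketched as follows. ``split_into_patches`` is a hypothetical helper, not part of the library, and the real function also stitches the per-patch results back together:

```python
import numpy as np

def split_into_patches(image, patch_size=2000, overlap=200):
    """Yield (row, col, patch) tuples tiling `image` with overlapping patches."""
    step = patch_size - overlap
    h, w = image.shape[:2]
    for r in range(0, max(h - overlap, 1), step):
        for c in range(0, max(w - overlap, 1), step):
            yield r, c, image[r:r + patch_size, c:c + patch_size]

# a blank stand-in for a large RGB image
img = np.zeros((4100, 3000, 3), dtype=np.uint8)
patches = list(split_into_patches(img))
```

The 200-pixel overlap is what lets grains straddling a patch boundary be detected whole in at least one patch; the overlapping detections are then merged.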
Just like before, the ``all_grains`` list contains shapely polygons of the grains detected in the image. The image containing the grain labels can be generated like this:

.. code-block:: python

   labels = seg.rasterize_grains(all_grains, large_image)
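Conceptually, rasterizing grains means burning each polygon's index into a label array. A slow but self-contained sketch of that idea (use ``rasterize_grains`` in practice, which is much faster):

```python
import numpy as np
from shapely.geometry import Point, Polygon

def rasterize_sketch(polygons, shape):
    """Burn index i+1 of each polygon into a zero-initialized label array."""
    labels = np.zeros(shape, dtype=np.int32)
    for i, poly in enumerate(polygons, start=1):
        minx, miny, maxx, maxy = poly.bounds
        # only test pixels inside the polygon's bounding box
        for row in range(int(miny), int(np.ceil(maxy)) + 1):
            for col in range(int(minx), int(np.ceil(maxx)) + 1):
                if 0 <= row < shape[0] and 0 <= col < shape[1]:
                    if poly.contains(Point(col, row)):
                        labels[row, col] = i
    return labels

grain = Polygon([(2, 2), (8, 2), (8, 8), (2, 8)])
labels = rasterize_sketch([grain], (12, 12))
```

Background pixels stay 0, so the label image can be fed directly to tools that expect scikit-image-style labels.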
See the `Segment_every_grain.ipynb <https://github.com/zsylvester/segmenteverygrain/blob/main/segmenteverygrain/Segment_every_grain.ipynb>`_ notebook for an example
of how the models can be loaded and used for segmenting an image and QC-ing the result. The notebook goes through the steps of loading the models, running the
segmentation, interactively updating the result, and saving the grain data and the mask.

The `Train_seg_unet_model.ipynb <https://github.com/zsylvester/segmenteverygrain/blob/main/segmenteverygrain/Train_seg_unet_model.ipynb>`_ notebook goes through the
steps needed to create, train, and test the Unet model. If the base Unet model does not work well on a specific type of image, it is a good idea to generate some
new training data (a few small images are usually enough) and to fine-tune the base model so that it works better on the new image type. The workflow in the
``Train_seg_unet_model.ipynb`` notebook can be used for this fine-tuning; you just need to load the weights of the base model before starting the training.

docs/source/index.rst

Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
.. figure:: ../../gravel_example_mask.png
   :alt: grains detected in an image
   :align: left

segmenteverygrain
=================

`GitHub Repository <https://github.com/zsylvester/segmenteverygrain>`_

``segmenteverygrain`` is a Python package that aims to detect grains (or grain-like objects) in images.
The goal is to develop a machine learning model that does a reasonably good job of detecting most of the grains in a photo, so that it is
useful for determining grain size and grain shape, a common task in geomorphology and sedimentary geology. ``segmenteverygrain``
relies on the `Segment Anything Model (SAM) <https://github.com/facebookresearch/segment-anything>`_, developed by Meta,
for getting high-quality outlines of the grains. However, SAM requires prompts for every object detected and, when used in
'everything' mode, it tends to be slow and results in many overlapping masks and non-grain (background) objects.
To deal with these issues, ``segmenteverygrain`` relies on a Unet-style, patch-based convolutional neural network to create a
first-pass segmentation, which is then used to generate prompts for the SAM-based segmentation. Some of the grains will be missed
with this approach, but the segmentations that are created tend to be of high quality.

``segmenteverygrain`` also includes a set of functions that make it possible to clean up the segmentation results: deleting and
merging objects by clicking on them, and adding grains that were not segmented automatically. The QC-d masks can be saved and
added to a dataset of grain images. These images can then be used to improve the Unet model.

Installation
------------

.. toctree::
   :caption: Installation

To install ``segmenteverygrain``, you can use ``pip``:

.. code-block:: shell

   pip install segmenteverygrain

Or you can install it from the source code:

.. code-block:: shell

   git clone https://github.com/zsylvester/segmenteverygrain.git
   cd segmenteverygrain
   pip install .

The easiest way of creating a Python environment in which ``segmenteverygrain`` works well is to use
the `environment.yml <https://github.com/zsylvester/segmenteverygrain/blob/main/environment.yml>`_ file with conda (or mamba).

Contents
--------

.. toctree::
   :maxdepth: 2

   getting_started
   api_reference
