67 changes: 11 additions & 56 deletions .github/workflows/docs.yaml
@@ -1,19 +1,21 @@
name: Build Docs for the latest
# This workflow is purely for validating the docs build.
name: Build Docs Validation

on:
workflow_dispatch: # run on request (no need for PR)
push:
branches:
- develop
pull_request:
paths:
- "docs/**"
- "src/**"
- "pyproject.toml"
- ".readthedocs.yaml"

# Declare default permissions as read only.
permissions: read-all

jobs:
Build-Docs:
runs-on: ubuntu-24.04
permissions:
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
@@ -26,54 +28,7 @@ jobs:
pip install '.[ci_tox]'
- name: Build-Docs
run: tox -e build-doc
- name: Create gh-pages branch
- name: Check docs build
run: |
if [[ ${{github.event_name}} == 'workflow_dispatch' ]]; then
echo RELEASE_VERSION="test_build" >> $GITHUB_ENV
else
echo RELEASE_VERSION=${GITHUB_REF#refs/*/} >> $GITHUB_ENV
fi
echo SOURCE_NAME=${GITHUB_REF#refs/*/} >> $GITHUB_OUTPUT
echo SOURCE_BRANCH=${GITHUB_REF#refs/heads/} >> $GITHUB_OUTPUT
echo SOURCE_TAG=${GITHUB_REF#refs/tags/} >> $GITHUB_OUTPUT

existed_in_remote=$(git ls-remote --heads origin gh-pages)

if [[ -z ${existed_in_remote} ]]; then
echo "Creating gh-pages branch"
git config --local user.email "[email protected]"
git config --local user.name "GitHub Action"
git checkout --orphan gh-pages
git reset --hard
touch .nojekyll
git add .nojekyll
git commit -m "Initializing gh-pages branch"
git push origin gh-pages
git checkout ${{steps.branch_name.outputs.SOURCE_NAME}}
echo "Created gh-pages branch"
else
echo "Branch gh-pages already exists"
fi
- name: Commit docs to gh-pages branch
run: |
git fetch
git checkout gh-pages
mkdir -p /tmp/docs_build
cp -r docs/build/html/* /tmp/docs_build/
rm -rf ${{ env.RELEASE_VERSION }}/*
echo '<html><head><meta http-equiv="refresh" content="0; url=stable/" /></head></html>' > index.html
mkdir -p ${{ env.RELEASE_VERSION }}
cp -r /tmp/docs_build/* ./${{ env.RELEASE_VERSION }}
rm -rf /tmp/docs_build
git config --local user.email "[email protected]"
git config --local user.name "GitHub Action"
if [[ ${{ env.RELEASE_VERSION }} != 'test_build' ]]; then
ln -sfn ${{ env.RELEASE_VERSION }} latest
fi
git add ./latest ${{ env.RELEASE_VERSION }}
git commit -m "Update documentation" -a || true
- name: Push changes
uses: ad-m/github-push-action@fcea09907c44d7a7a3331c9c04080d55d87c95fe # master
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
branch: gh-pages
echo "Documentation built successfully!"
echo "Check docs/build/html/index.html for local preview"
24 changes: 24 additions & 0 deletions .readthedocs.yaml
@@ -0,0 +1,24 @@
# ReadTheDocs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

version: 2

# Set the OS, Python version and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.12"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
configuration: docs/source/conf.py

# Declare the Python requirements required to build your documentation
python:
install:
- method: pip
path: .
extra_requirements:
- docs
# Support for multiple versions (this is one of the main benefits of ReadTheDocs)
# You can configure this later in the ReadTheDocs dashboard
Binary file added docs/source/_static/logos/geti-favicon-64.png
4 changes: 0 additions & 4 deletions docs/source/_static/redirects/guide-homepage-redirect.html

This file was deleted.

62 changes: 12 additions & 50 deletions docs/source/conf.py
@@ -1,15 +1,14 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
"""Configuration file for the Sphinx documentation builder.

# -- Path setup -------------------------------------------------------------- #
For the full list of built-in configuration values, see the documentation:
https://www.sphinx-doc.org/en/master/usage/configuration.html

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
-- Project information -----------------------------------------------------
https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
"""

# Copyright (C) 2022-2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import os
import sys
@@ -44,7 +43,6 @@
"sphinx_design",
"myst_parser", # Enhanced markdown support
"sphinx.ext.todo", # Support for TODO items
"sphinx.ext.githubpages", # GitHub Pages support
"sphinx.ext.coverage", # Documentation coverage check
]

@@ -58,11 +56,6 @@
"tasklist",
]

source_suffix = {
".rst": "restructuredtext",
".md": "markdown",
}

suppress_warnings = [
"ref.python",
"autosectionlabel.*",
@@ -80,37 +73,10 @@
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_theme = "pydata_sphinx_theme"
html_theme = "sphinx_book_theme"
html_static_path = ["_static"]

# Show source link & copyright
html_show_sourcelink = True
html_show_sphinx = False
html_show_copyright = True
html_copy_source = True


html_theme_options = {
"navbar_center": [],
"navbar_end": ["search-field.html", "theme-switcher.html", "navbar-icon-links.html"],
"search_bar_text": "Search",
"logo": {
"image_light": "logos/otx-logo.png",
"image_dark": "logos/otx-logo.png",
},
"icon_links": [
{
"name": "GitHub",
"url": "https://github.com/open-edge-platform/training_extensions",
"icon": "fab fa-github",
"type": "fontawesome",
},
],
"use_edit_page_button": True,
"show_nav_level": 3,
"navigation_depth": 6,
"show_toc_level": 3,
}
html_logo = "_static/logos/otx-logo.png"
html_favicon = "_static/logos/geti-favicon-64.png"
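The removed ``pydata_sphinx_theme`` options included a GitHub icon link and an edit-page button; ``sphinx_book_theme`` exposes similar settings if they are still wanted. A minimal sketch (not part of this diff), assuming the standard sphinx-book-theme option names, with values taken from the repository URLs and html_context already present in this config:

# Minimal sketch, assuming sphinx-book-theme's documented option names.
html_theme_options = {
    "repository_url": "https://github.com/open-edge-platform/training_extensions",
    "repository_branch": "develop",
    "path_to_docs": "docs/source/",
    "use_repository_button": True,   # "view source on GitHub" button
    "use_edit_page_button": True,    # per-page "suggest edit" button
}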

html_context = {
"github_user": "open-edge-platform",
@@ -119,10 +85,6 @@
"doc_path": "docs/source/",
}

html_css_files = [
"css/custom.css",
]

# -- Extension configuration -------------------------------------------------
autodoc_docstring_signature = True
autodoc_member_order = "bysource"
23 changes: 12 additions & 11 deletions docs/source/guide/explanation/algorithms/anomaly/index.rst
@@ -77,15 +77,15 @@ Models
******
As mentioned above, the goal of visual anomaly detection is to learn a representation of normal behaviour in the data and then identify instances that deviate from this normal behaviour. OpenVINO Training Extensions supports several deep learning approaches to this task, including the following:

+--------+-------------------------------------------------------------------------------------------------------------------+----------------------+-----------------+
| Name | Recipe | Complexity (GFLOPs) | Model size (MB) |
+========+===================================================================================================================+======================+=================+
+--------+----------------------------------------------------------------------------------------------------------------------+----------------------+-----------------+
| Name | Recipe | Complexity (GFLOPs) | Model size (MB) |
+========+======================================================================================================================+======================+=================+
| PADIM | `padim <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/anomaly_/padim.yaml>`_ | 3.9 | 168.4 |
+--------+-------------------------------------------------------------------------------------------------------------------+----------------------+-----------------+
+--------+----------------------------------------------------------------------------------------------------------------------+----------------------+-----------------+
| STFPM | `stfpm <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/anomaly_/stfpm.yaml>`_ | 5.6 | 21.1 |
+--------+-------------------------------------------------------------------------------------------------------------------+----------------------+-----------------+
+--------+----------------------------------------------------------------------------------------------------------------------+----------------------+-----------------+
| U-Flow | `uflow <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/anomaly_/uflow.yaml>`_ | 59.6 | 62.88 |
+--------+-------------------------------------------------------------------------------------------------------------------+----------------------+-----------------+
+--------+----------------------------------------------------------------------------------------------------------------------+----------------------+-----------------+
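
Each of these models produces an image-level (and usually pixel-level) anomaly score, where a higher score means more anomalous. As a rough illustration of how such scores are commonly turned into normal/anomalous decisions (a sketch only, not OTX's actual post-processing), a threshold can be calibrated on held-out normal images:

.. code-block:: python

   import numpy as np

   def calibrate_threshold(normal_scores: np.ndarray, quantile: float = 0.99) -> float:
       """Pick a threshold from anomaly scores of known-normal validation images."""
       return float(np.quantile(normal_scores, quantile))

   def predict(scores: np.ndarray, threshold: float) -> np.ndarray:
       """Flag samples whose anomaly score exceeds the calibrated threshold."""
       return scores > threshold

   # toy usage with made-up scores
   val_normal = np.array([0.11, 0.14, 0.09, 0.13, 0.12])
   test_scores = np.array([0.10, 0.45, 0.13])
   threshold = calibrate_threshold(val_normal)
   print(predict(test_scores, threshold))  # [False  True False]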


Clustering-based Models
@@ -161,12 +161,13 @@ Normalizing Flow Models
Normalizing Flow models use invertible neural networks to transform image features into a simpler distribution, like a Gaussian. During inference, the Flow network is used to compute the likelihood of the input image under the learned distribution, assigning low probabilities to anomalous samples. OpenVINO Training Extensions currently supports `U-Flow: Unsupervised Anomaly Detection via Normalizing Flow <https://arxiv.org/pdf/2103.04257.pdf>`_.
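
As a sketch of the general principle behind these models (not the exact U-Flow objective): an invertible network :math:`f_\theta` maps features :math:`x` to a latent :math:`z = f_\theta(x)` with a simple base density :math:`p_Z` (e.g. a standard Gaussian), and the change-of-variables formula gives the exact log-likelihood, whose negative can serve as an anomaly score:

.. math::

   \log p_X(x) = \log p_Z\bigl(f_\theta(x)\bigr)
               + \log \left| \det \frac{\partial f_\theta(x)}{\partial x} \right|,
   \qquad
   s(x) = -\log p_X(x).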

U-Flow
^^^^^
^^^^^^

.. figure:: ../../../../../utils/images/uflow.png
:width: 600
:align: center
:alt: Anomaly Task Types
.. # TODO: Add U-Flow figure when uflow.png image is available
.. # figure:: ../../../../../utils/images/uflow.png
.. # :width: 600
.. # :align: center
.. # :alt: U-Flow Architecture

U-Flow consists of four stages.

@@ -55,25 +55,25 @@ Models

We support the following ready-to-use model recipes:

+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+-----------------+
| Model Name | Complexity (GFLOPs) | Model params (M)|
+==================================================================================================================================================================================================================+=====================+=================+
| `MobileNet-V3-large <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/mobilenet_v3_large.yaml>`_ | 0.86 | 2.97 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+-----------------+
| `MobileNet-V3-small <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/tv_mobilenet_v3_small.yaml>`_ | 0.22 | 0.93 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+-----------------+
| `EfficinetNet-B0 <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/efficientnet_b0.yaml>`_ | 1.52 | 4.09 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+-----------------+
| `EfficientNet-B3 <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/tv_efficientnet_b3.yaml>`_ | 3.84 | 10.70 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+-----------------+
| `EfficientNet-V2-S <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/efficientnet_v2.yaml>`_ | 5.76 | 20.23 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+-----------------+
| `EfficientNet-V2-l <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/tv_efficientnet_v2_l.yaml>`_ | 48.92 | 117.23 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+-----------------+
| `DeiT-Tiny <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/deit_tiny.yaml>`_ | 2.51 | 22.0 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+-----------------+
| `DINO-V2 <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/dino_v2.yaml>`_ | 12.46 | 88.0 |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+-----------------+
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------+
| Model Name | Complexity (GFLOPs) | Model params (M) |
+=============================================================================================================================================================================+======================+===================+
| `MobileNet-V3-large <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/mobilenet_v3_large.yaml>`_ | 0.86 | 2.97 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------+
| `MobileNet-V3-small <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/tv_mobilenet_v3_small.yaml>`_ | 0.22 | 0.93 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------+
| `EfficinetNet-B0 <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/efficientnet_b0.yaml>`_ | 1.52 | 4.09 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------+
| `EfficientNet-B3 <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/tv_efficientnet_b3.yaml>`_ | 3.84 | 10.70 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------+
| `EfficientNet-V2-S <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/efficientnet_v2.yaml>`_ | 5.76 | 20.23 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------+
| `EfficientNet-V2-l <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/tv_efficientnet_v2_l.yaml>`_ | 48.92 | 117.23 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------+
| `DeiT-Tiny <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/deit_tiny.yaml>`_ | 2.51 | 22.0 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------+
| `DINO-V2 <https://github.com/open-edge-platform/training_extensions/blob/develop/src/otx/recipe/classification/multi_class_cls/dino_v2.yaml>`_ | 12.46 | 88.0 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------+
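
The recipes in the table above are ordinary YAML files, typically consumed via the ``otx train`` CLI or the Python ``Engine`` API. The snippet below is a rough sketch of the Python route; the ``Engine`` argument names are assumptions here and should be checked against the OTX API reference:

.. code-block:: python

   # Rough sketch; argument names below are assumptions, see the OTX API docs.
   from otx.engine import Engine

   engine = Engine(
       model="efficientnet_b0",                 # assumed: select a model by name
       data_root="path/to/multiclass_dataset",  # hypothetical dataset location
       work_dir="otx-workspace",
   )
   engine.train()  # training settings fall back to defaults unless overridden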

`MobileNet-V3 <https://arxiv.org/abs/1905.02244>`_ is the best choice when training time and computational cost are the priority; nevertheless, this recipe still provides competitive accuracy.
`EfficientNet-B0/B3 <https://arxiv.org/abs/1905.11946>`_ consumes more FLOPs than MobileNet, providing better performance on large datasets.