
Commit f553c04

Authored by iChizer0, nullptr, and Pillar1989

docs: bug fixes, update docs (#271)

* feat: lazy import and auto install
* chore: add docs module
* fix: dummy input shape and import error
* chore: sync quantizer from upstream
* chore: restore vitepress pkgs
* refactor: lazy import support arch spec
* chore: sync latest buggy qta script
* fix: auto adjust img size based on model's input shape
* fix: update class meta from real datasets
* fix: import errors, auto install missing deps
* fix: add missing quan switch hook
* fix: wrong val worker numbers
* fix: custom datasets path, rtmdet qat config
* docs: update docs
* chore: ignore .npy file
* chore: cleanup
* fix: file name typo
* docs: allow remote access by default
* fix: workdir, input quantize sta, stale configs
* fix: workdir
* fix: sync batch norm breaks qat export
* docs: update contents

Co-authored-by: nullptr <nullptr@localhost>
Co-authored-by: Baozhu Zuo <[email protected]>

1 parent c1643db · commit f553c04

72 files changed: +8111 −914 lines

.gitignore

Lines changed: 3 additions & 0 deletions

```diff
@@ -58,3 +58,6 @@ docs/.vitepress/dist
 examples/
 work_dir/
 data/
+
+# numpy
+*.npy
```

configs/datasets/coco_detection.py

Lines changed: 10 additions & 5 deletions

```diff
@@ -13,6 +13,10 @@
 # dataset settings
 dataset_type = CocoDataset
 data_root = "datasets/coco/"
+train_ann_file = "annotations/instances_train2017.json"
+val_ann_file = "annotations/instances_val2017.json"
+train_img_prefix = "train2017/"
+val_img_prefix = "val2017/"
 
 # Example to use different file client
 # Method 1: simply set the data root and let the file I/O module
@@ -68,8 +72,8 @@
     dataset=dict(
         type=dataset_type,
         data_root=data_root,
-        ann_file="annotations/instances_train2017.json",
-        data_prefix=dict(img="train2017/"),
+        ann_file=train_ann_file,
+        data_prefix=dict(img=train_img_prefix),
         filter_cfg=dict(filter_empty_gt=True, min_size=32),
         pipeline=train_pipeline,
     ),
@@ -84,8 +88,8 @@
     dataset=dict(
         type=dataset_type,
         data_root=data_root,
-        ann_file="annotations/instances_val2017.json",
-        data_prefix=dict(img="val2017/"),
+        ann_file=val_ann_file,
+        data_prefix=dict(img=val_img_prefix),
         test_mode=True,
         pipeline=test_pipeline,
         # batch_shapes_cfg=batch_shapes_cfg,
@@ -95,10 +99,11 @@
 
 val_evaluator = dict(
     type=CocoMetric,
-    ann_file=data_root + "annotations/instances_val2017.json",
+    ann_file=data_root + val_ann_file,
     metric="bbox",
     format_only=False,
     backend_args=backend_args,
+    sort_categories=True
 )
 test_evaluator = val_evaluator
 
```
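The hoisted `train_ann_file`, `val_ann_file`, and `*_img_prefix` variables keep every dataset path in this base config in one place, and `val_evaluator` now derives its annotation path from the same variables. Note that the dataloader dicts capture these strings when the module is executed, so a downstream config that merely reassigns `train_ann_file` after importing this file will not change the already-built dicts; it has to update them directly. A minimal sketch of such a derived config follows — the `my_coco_like` paths and file names are hypothetical, not part of this commit:

```python
# Hypothetical derived config (not from this commit): repoints the COCO-format
# dataloaders and evaluator at a custom dataset by editing the built dicts.
from .datasets.coco_detection import *  # noqa: F401,F403

data_root = "datasets/my_coco_like/"  # hypothetical dataset location

train_dataloader["dataset"].update(
    data_root=data_root,
    ann_file="annotations/train.json",
    data_prefix=dict(img="images/train/"),
)
val_dataloader["dataset"].update(
    data_root=data_root,
    ann_file="annotations/val.json",
    data_prefix=dict(img="images/val/"),
)
# Keep the evaluator's annotation path in sync with the val dataloader.
val_evaluator.update(ann_file=data_root + "annotations/val.json")

test_dataloader = val_dataloader
test_evaluator = val_evaluator
```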

configs/rtmdet_l_8xb32_300e_coco.py

Lines changed: 6 additions & 5 deletions

```diff
@@ -7,7 +7,7 @@
 from .datasets.coco_detection import *
 
 from torchvision.ops import nms
-from torch.nn import SiLU, ReLU6, SyncBatchNorm, ReLU
+from torch.nn import SiLU, ReLU6, BatchNorm2d, ReLU
 from torch.optim.adamw import AdamW
 
 from mmengine.hooks import EMAHook
@@ -77,7 +77,7 @@
         deepen_factor=d_factor,
         widen_factor=w_factor,
         channel_attention=False,
-        norm_cfg=dict(type=SyncBatchNorm),
+        norm_cfg=dict(type=BatchNorm2d),
         act_cfg=dict(type=ReLU, inplace=True),
     ),
     neck=dict(
@@ -88,18 +88,18 @@
         out_channels=256,
         num_csp_blocks=1,
         expand_ratio=0.5,
-        norm_cfg=dict(type=SyncBatchNorm),
+        norm_cfg=dict(type=BatchNorm2d),
         act_cfg=dict(type=ReLU, inplace=True),
     ),
     bbox_head=dict(
         type=RTMDetHead,
         head_module=dict(
            type=RTMDetSepBNHeadModule,
-            num_classes=80,
+            num_classes=num_classes,
            in_channels=256,
            stacked_convs=2,
            feat_channels=256,
-            norm_cfg=dict(type=SyncBatchNorm),
+            norm_cfg=dict(type=BatchNorm2d),
            act_cfg=dict(type=ReLU, inplace=True),
            share_conv=True,
            pred_kernel_size=1,
@@ -133,6 +133,7 @@
     ),
 )
 
+
 deploy = dict(
     type=RTMDetInfer,
     data_preprocessor=dict(
```
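The norm-layer swap matches the commit note that sync batch norm breaks QAT export: `SyncBatchNorm` only behaves correctly with an initialized distributed process group, and the conv+BN fusion used by quantization and export tooling targets plain `BatchNorm2d`. For checkpoints that were already trained with `SyncBatchNorm`, the layers can be folded back to `BatchNorm2d` before export. The helper below is a generic sketch using only core PyTorch APIs; it is illustrative and not part of SSCMA or this commit:

```python
import torch
import torch.nn as nn


def revert_sync_batchnorm(module: nn.Module) -> nn.Module:
    """Recursively replace nn.SyncBatchNorm with equivalent nn.BatchNorm2d.

    Intended for 2-D conv nets; mirrors the weight/buffer copy that
    torch.nn.SyncBatchNorm.convert_sync_batchnorm does in the other direction.
    """
    converted = module
    if isinstance(module, nn.SyncBatchNorm):
        converted = nn.BatchNorm2d(
            module.num_features,
            eps=module.eps,
            momentum=module.momentum,
            affine=module.affine,
            track_running_stats=module.track_running_stats,
        )
        if module.affine:
            with torch.no_grad():
                converted.weight = module.weight
                converted.bias = module.bias
        converted.running_mean = module.running_mean
        converted.running_var = module.running_var
        converted.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        converted.add_module(name, revert_sync_batchnorm(child))
    return converted


# Usage sketch: convert before tracing, ONNX export, or QAT preparation.
# model = revert_sync_batchnorm(model).eval()
```

The same diff also replaces the hard-coded `num_classes=80` in the head with the shared `num_classes` variable, so the head follows the dataset's class count.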

configs/rtmdet_nano_8xb32_300e_coco.py

Lines changed: 2 additions & 1 deletion

```diff
@@ -22,6 +22,7 @@
 from sscma.models import ExpMomentumEMA
 from sscma.quantizer import RtmdetQuantModel
 
+
 d_factor = 0.33
 w_factor = 0.25
 
@@ -48,14 +49,14 @@
     )
 )
 
+
 model["bbox_head"].update(train_cfg=model["train_cfg"])
 model["bbox_head"].update(test_cfg=model["test_cfg"])
 quantizer_config = dict(
     type=RtmdetQuantModel,
     bbox_head=model["bbox_head"],
     data_preprocessor=model["data_preprocessor"],  # data_preprocessor,
 )
-
 train_pipeline = [
     dict(
         type=LoadImageFromFile,
```
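One detail worth noting in the block above: `quantizer_config` stores the very same `bbox_head` and `data_preprocessor` dict objects as `model`, so the detector and its QAT wrapper stay in sync by reference rather than by copy. A plain-Python illustration of that sharing pattern — toy values only, not the real SSCMA config or registry:

```python
# Illustrative only: nested config dicts are shared by reference, so a later
# tweak to the model's head also shows up in the quantizer config built from it.
model = dict(
    bbox_head=dict(num_classes=80),
    data_preprocessor=dict(mean=[0, 0, 0], std=[255, 255, 255]),
    train_cfg=dict(assigner="toy_assigner"),
    test_cfg=dict(score_thr=0.001),
)

# Merge train/test settings into the head before sharing it.
model["bbox_head"].update(train_cfg=model["train_cfg"])
model["bbox_head"].update(test_cfg=model["test_cfg"])

quantizer_config = dict(
    bbox_head=model["bbox_head"],                  # same dict object
    data_preprocessor=model["data_preprocessor"],  # same dict object
)

model["bbox_head"]["num_classes"] = 2
assert quantizer_config["bbox_head"]["num_classes"] == 2
```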

docs/.vitepress/config.ts

Lines changed: 5 additions & 2 deletions

```diff
@@ -1,9 +1,12 @@
 import { defineConfig } from 'vitepress'
 
+import { withMermaid } from 'vitepress-plugin-mermaid'
+
 import en_US from './locales/en_US'
 import zh_CN from './locales/zh_CN'
 
-export default defineConfig({
+
+export default withMermaid(defineConfig({
   base: '/',
   title: 'SSCMA',
   lastUpdated: true,
@@ -46,4 +49,4 @@ export default defineConfig({
     { icon: 'github', link: 'https://github.com/Seeed-Studio/ModelAssistant' }
   ]
 }
-})
+}))
```

docs/.vitepress/locales/en_US.ts

Lines changed: 57 additions & 15 deletions

```diff
@@ -2,20 +2,25 @@ import { defineConfig } from 'vitepress'
 
 export default defineConfig({
   lang: 'en-US',
-  description: 'SSCMA is an open-source project focused on embedded AI.',
+  description: '*SSCMA* is an open-source project focused on embedded artificial intelligence.',
 
   themeConfig: {
     nav: nav(),
     sidebar: { '/': sidebar() },
 
+    darkModeSwitchLabel: 'Toggle Appearance',
+    outlineTitle: 'Page Outline',
+    lastUpdatedText: 'Last Updated',
+    returnToTopLabel: 'Back to Top',
+
     editLink: {
       pattern: 'https://github.com/Seeed-Studio/ModelAssistant/edit/main/docs/:path',
-      text: 'Suggest changes to this page'
+      text: 'Suggest edits for this page'
     },
 
     footer: {
-      message: 'Released under the Apache 2.0 License',
-      copyright: 'Copyright © 2023-Present Seeed Studio & SSCMA Contributors',
+      message: 'Published under Apache 2.0 License',
+      copyright: 'Copyright © 2023-Present Seeed Studio and SSCMA Contributors'
     }
   }
 })
@@ -30,32 +35,69 @@ function nav() {
 function sidebar() {
   return [
     {
-      text: 'Introduction',
+      text: 'Getting Started',
       collapsed: false,
       items: [
-        { text: 'What is SSCMA?', link: '/en/introduction/overview' },
-        { text: 'Quick Start', link: '/en/introduction/quick_start' },
+        { text: 'What is SSCMA?', link: '/introduction/overview' },
+        { text: 'Quick Start', link: '/introduction/quick_start' },
+        { text: 'Installation Guide', link: '/introduction/installation' }
       ]
     },
-
     {
-      text: 'Edge Impulse',
+      text: 'Tutorials',
       collapsed: false,
       items: [
         {
-          text: 'Machine Learning Blocks',
-          link: '/en/edgeimpulse/ei_ml_blocks',
+          text: 'Workflow Overview',
+          link: '/tutorials/overview',
+        },
+        {
+          text: 'Model Training and Export',
+          link: '/tutorials/training/overview',
+          items: [
+            { text: 'FOMO Model', link: '/tutorials/training/fomo' },
+            { text: 'PFLD Model', link: '/tutorials/training/pfld' },
+            { text: 'RTMDet Model', link: '/tutorials/training/rtmdet' },
+            { text: 'VAE Model', link: '/tutorials/training/vae' }
+          ]
         },
+        {
+          text: 'Model Deployment',
+          link: '/tutorials/deploy/overview',
+          items: [
+            { text: 'Grove Vision AI V2', link: '/tutorials/deploy/grove_vision_ai_v2' },
+            { text: 'XIAO ESP32S3 Sense', link: '/tutorials/deploy/xiao_esp32s3' }
+          ]
+        }
+      ]
+    },
+    {
+      text: 'Customization',
+      collapsed: false,
+      items: [
+        { text: 'Basic Configuration Structure', link: '/custom/basics' },
+        { text: 'Model Structure', link: '/custom/model' },
+        { text: 'Training and Validation Pipelines', link: '/custom/pipelines' },
+        { text: 'Optimizers', link: '/custom/optimizer' },
+      ]
+    },
+    {
+      text: 'Datasets',
+      collapsed: false,
+      items: [
+        { text: 'Public Datasets', link: '/datasets/public' },
+        { text: 'Custom Datasets', link: '/datasets/custom' },
+        { text: 'Dataset Formats and Extensions', link: '/datasets/extension' },
       ]
     },
     {
       text: 'Community',
       collapsed: false,
       items: [
-        { text: 'FAQs', link: '/en/community/faqs' },
-        { text: 'Reference', link: '/en/community/reference' },
-        { text: 'Contribution', link: '/en/community/contributing' },
-        { text: 'Copyrights and Licenses', link: '/en/community/license' }
+        { text: 'FAQs', link: '/community/faqs' },
+        { text: 'Reference Documentation', link: '/community/reference' },
+        { text: 'Contribution Guide', link: '/community/contributing' },
+        { text: 'Open Source License', link: '/community/license' }
      ]
    }
  ]
```

docs/.vitepress/locales/zh_CN.ts

Lines changed: 42 additions & 4 deletions

```diff
@@ -39,17 +39,55 @@ function sidebar() {
       collapsed: false,
       items: [
         { text: '什么是 SSCMA?', link: '/zh_cn/introduction/overview' },
-        { text: '快速上手', link: '/zh_cn/introduction/quickstart' },
+        { text: '快速上手', link: '/zh_cn/introduction/quick_start' },
+        { text: '安装指南', link: '/zh_cn/introduction/installation' }
       ]
     },
     {
-      text: 'Edge Impulse',
+      text: '基础教程',
       collapsed: false,
       items: [
         {
-          text: 'Edge Impulse 机器学习块',
-          link: '/zh_cn/edgeimpulse/ei_ml_blocks',
+          text: '流程概览',
+          link: '/zh_cn/tutorials/overview',
         },
+        {
+          text: '模型训练与导出',
+          link: '/zh_cn/tutorials/training/overview',
+          items: [
+            { text: 'FOMO 模型', link: '/zh_cn/tutorials/training/fomo' },
+            { text: 'PFLD 模型', link: '/zh_cn/tutorials/training/pfld' },
+            { text: 'RTMDet 模型', link: '/zh_cn/tutorials/training/rtmdet' },
+            { text: 'VAE 模型', link: '/zh_cn/tutorials/training/vae' }
+          ]
+        },
+        {
+          text: '模型部署',
+          link: '/zh_cn/tutorials/deploy/overview',
+          items: [
+            { text: 'Grove Vision AI V2', link: '/zh_cn/tutorials/deploy/grove_vision_ai_v2' },
+            { text: 'XIAO ESP32S3 Sense', link: '/zh_cn/tutorials/deploy/xiao_esp32s3' }
+          ]
+        }
+      ]
+    },
+    {
+      text: '自定义',
+      collapsed: false,
+      items: [
+        { text: '基础配置结构', link: '/zh_cn/custom/basics' },
+        { text: '模型结构', link: '/zh_cn/custom/model' },
+        { text: '训练与验证管线', link: '/zh_cn/custom/pipelines' },
+        { text: '优化器', link: '/zh_cn/custom/optimizer' },
+      ]
+    },
+    {
+      text: '数据集',
+      collapsed: false,
+      items: [
+        { text: '公共数据集', link: '/zh_cn/datasets/public' },
+        { text: '自制数据集', link: '/zh_cn/datasets/custom' },
+        { text: '数据集格式与扩展', link: '/zh_cn/datasets/extension' },
       ]
     },
     {
```

docs/.vitepress/theme/style.css

Lines changed: 6 additions & 0 deletions

```diff
@@ -84,3 +84,9 @@
 .DocSearch {
   --docsearch-primary-color: var(--vp-c-brand) !important;
 }
+
+
+.mermaid {
+  display: flex;
+  justify-content: center;
+}
```

docs/community/contributing.md

Lines changed: 52 additions & 0 deletions (new file)

````diff
@@ -0,0 +1,52 @@
+# Contribute
+
+Contributions to [SSCMA](https://github.com/Seeed-Studio/ModelAssistant) are welcome! We welcome contributions of any kind, including but not limited to:
+
+- Fixing bugs
+
+The steps for fixing code implementation errors are as follows:
+
+If the code change is large, please submit an issue first that describes the symptom, the cause, and how to reproduce it, so that the fix can be discussed and agreed on.
+
+Fix the bug, add the corresponding unit tests, and submit a pull request.
+
+- New features or components
+
+If a new feature or module involves large code changes, please submit an issue first to confirm that the feature is necessary.
+
+Implement the new feature, add unit tests, and submit a pull request.
+
+- Documentation additions
+
+For documentation fixes, submit a pull request directly.
+
+The steps to add documentation or translate it into another language are as follows:
+
+Submit an issue first to confirm the need for the documentation.
+
+## How to Contribute
+
+Please refer to the [GitHub Documentation - Collaborating](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests).
+
+## Commit Style
+
+We recommend following these conventions when writing commit messages, as they keep the project history cleaner and easier to iterate on.
+
+```
+build: build related changes
+chore: typo fixes, library updates, etc.
+ci: continuous integration related changes
+deps: dependency updates
+docs: documentation related changes
+feat: new features
+fix: bug fixes
+perf: add performance results
+refactor: refactor components
+revert: undo previous changes
+style: code style changes
+test: test case changes
+```
+
+## Licensing
+
+By submitting a contribution, you agree to the project's [License](./license).
````

docs/community/faqs.md

Lines changed: 3 additions & 0 deletions (new file)

```diff
@@ -0,0 +1,3 @@
+# FAQs
+
+If you run into any problems while using SSCMA, you are welcome to submit your questions on [SSCMA Issues](https://github.com/Seeed-Studio/ModelAssistant/issues), and we will reply as soon as possible. We also collect common problems here so that you can find solutions quickly.
```
