
Commit 1a90fa8

AronLin and cclauss authored
[Fix] Fix a lot of typos (#6190)
* pre-commit: Add codespell to look for typos
* fixup! Indentation
* Update lint
* Fix lint
* Fix typo
* Fix comments

Co-authored-by: Christian Clauss <[email protected]>
1 parent c44a058 commit 1a90fa8


51 files changed: +95 -95 lines changed

configs/instaboost/README.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ InstaBoost have been already integrated in the data pipeline, thus all you need
 
 ## Results and Models
 
-- All models were trained on `coco_2017_train` and tested on `coco_2017_val` for conveinience of evaluation and comparison. In the paper, the results are obtained from `test-dev`.
+- All models were trained on `coco_2017_train` and tested on `coco_2017_val` for convenience of evaluation and comparison. In the paper, the results are obtained from `test-dev`.
 - To balance accuracy and training time when using InstaBoost, models released in this page are all trained for 48 Epochs. Other training and testing configs strictly follow the original framework.
 - For results and models in MMDetection V1.x, please refer to [Instaboost](https://github.com/GothicAi/Instaboost).

configs/scnet/README.md

Lines changed: 1 addition & 1 deletion
@@ -48,4 +48,4 @@ The results on COCO 2017val are shown in the below table. (results on test-dev a
 ### Notes
 
 - Training hyper-parameters are identical to those of [HTC](https://github.com/open-mmlab/mmdetection/tree/master/configs/htc).
-- TTA means Test Time Augmentation, which applies horizonal flip and multi-scale testing. Refer to [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r50_fpn_1x_coco.py).
+- TTA means Test Time Augmentation, which applies horizontal flip and multi-scale testing. Refer to [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r50_fpn_1x_coco.py).
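
In MMDetection configs, this kind of TTA is expressed through the test pipeline. A minimal sketch is shown below; the scale values are illustrative placeholders, not the ones used in the SCNet config linked above.

```python
# Sketch of a test pipeline with test-time augmentation: multiple image
# scales plus horizontal flip. Scale values are placeholders only.
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=[(1333, 640), (1333, 800), (1333, 960)],  # multi-scale testing
        flip=True,  # also run inference on the horizontally flipped image
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize',
                 mean=[123.675, 116.28, 103.53],
                 std=[58.395, 57.12, 57.375],
                 to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```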

docs/3_exist_data_new_model.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # 3: Train with customized models and standard datasets
 
-In this note, you will know how to train, test and inference your own customized models under standard datasets. We use the cityscapes dataset to train a customized Cascade Mask R-CNN R50 model as an example to demonstrate the whole process, which using [`AugFPN`](https://github.com/Gus-Guo/AugFPN) to replace the defalut `FPN` as neck, and add `Rotate` or `Translate` as training-time auto augmentation.
+In this note, you will know how to train, test and inference your own customized models under standard datasets. We use the cityscapes dataset to train a customized Cascade Mask R-CNN R50 model as an example to demonstrate the whole process, which using [`AugFPN`](https://github.com/Gus-Guo/AugFPN) to replace the default `FPN` as neck, and add `Rotate` or `Translate` as training-time auto augmentation.
 
 The basic steps are as below:
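
As a rough sketch of the neck swap this note describes (assuming an `AugFPN` class has already been implemented and registered in MMDetection's `NECKS` registry, which the excerpt above does not show), the config change looks roughly like this; the base config path and channel numbers are illustrative only.

```python
# Hypothetical config fragment: inherit a Cascade Mask R-CNN baseline and
# replace the default FPN neck with AugFPN. Assumes AugFPN is registered;
# the _base_ path and channel settings below are illustrative only.
_base_ = './cascade_mask_rcnn_r50_fpn_1x_coco.py'

model = dict(
    neck=dict(
        type='AugFPN',                       # instead of the default 'FPN'
        in_channels=[256, 512, 1024, 2048],  # ResNet-50 stage outputs
        out_channels=256,
        num_outs=5))
```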

docs/changelog.md

Lines changed: 3 additions & 3 deletions
@@ -385,7 +385,7 @@ In v2.12.0 MMDetection inevitably brings some BC-breakings, including the MMCV d
 - Update documentations (#4642, #4650, #4620, #4630)
 - Remove redundant code calling `import_modules_from_strings` (#4601)
 - Clean deprecated FP16 API (#4571)
-- Check whether `CLASSES` is correctly initialized in the intialization of `XMLDataset` (#4555)
+- Check whether `CLASSES` is correctly initialized in the initialization of `XMLDataset` (#4555)
 - Support batch inference in the inference API (#4462, #4526)
 - Clean deprecated warning and fix 'meta' error (#4695)

@@ -579,7 +579,7 @@ Function `get_subset_by_classes` in dataset is refactored and only filters out i
 - Fix the bug of training ATSS when there is no ground truth boxes (#3702)
 - Fix the bug of using Focal Loss when there is `num_pos` is 0 (#3702)
 - Fix the label index mapping in dataset browser (#3708)
-- Fix Mask R-CNN training stuck problem when ther is no positive rois (#3713)
+- Fix Mask R-CNN training stuck problem when there is no positive rois (#3713)
 - Fix the bug of `self.rpn_head.test_cfg` in `RPNTestMixin` by using `self.rpn_head` in rpn head (#3808)
 - Fix deprecated `Conv2d` from mmcv.ops (#3791)
 - Fix device bug in RepPoints (#3836)

@@ -594,7 +594,7 @@ Function `get_subset_by_classes` in dataset is refactored and only filters out i
 
 - Change to use `mmcv.utils.collect_env` for collecting environment information to avoid duplicate codes (#3779)
 - Update checkpoint file names to v2.0 models in documentation (#3795)
-- Update tutorials for changing runtime settings (#3778), modifing loss (#3777)
+- Update tutorials for changing runtime settings (#3778), modifying loss (#3777)
 - Improve the function of `simple_test_bboxes` in SABL (#3853)
 - Convert mask to bool before using it as img's index for robustness and speedup (#3870)
 - Improve documentation of modules and dataset customization (#3821)

docs/faq.md

Lines changed: 2 additions & 2 deletions
@@ -47,7 +47,7 @@ We list some common troubles faced by many users and their corresponding solutio
 2. You may also need to check the compatibility between the `setuptools`, `Cython`, and `PyTorch` in your environment.
 
 - "Segmentation fault".
-1. Check you GCC version and use GCC 5.4. This usually caused by the incompatibility between PyTorch and the environment (e.g., GCC < 4.9 for PyTorch). We also recommand the users to avoid using GCC 5.5 because many feedbacks report that GCC 5.5 will cause "segmentation fault" and simply changing it to GCC 5.4 could solve the problem.
+1. Check you GCC version and use GCC 5.4. This usually caused by the incompatibility between PyTorch and the environment (e.g., GCC < 4.9 for PyTorch). We also recommend the users to avoid using GCC 5.5 because many feedbacks report that GCC 5.5 will cause "segmentation fault" and simply changing it to GCC 5.4 could solve the problem.
 
 2. Check whether PyTorch is correctly installed and could use CUDA op, e.g. type the following command in your terminal.
 

@@ -73,7 +73,7 @@ We list some common troubles faced by many users and their corresponding solutio
 1. Check if the dataset annotations are valid: zero-size bounding boxes will cause the regression loss to be Nan due to the commonly used transformation for box regression. Some small size (width or height are smaller than 1) boxes will also cause this problem after data augmentation (e.g., instaboost). So check the data and try to filter out those zero-size boxes and skip some risky augmentations on the small-size boxes when you face the problem.
 2. Reduce the learning rate: the learning rate might be too large due to some reasons, e.g., change of batch size. You can rescale them to the value that could stably train the model.
 3. Extend the warmup iterations: some models are sensitive to the learning rate at the start of the training. You can extend the warmup iterations, e.g., change the `warmup_iters` from 500 to 1000 or 2000.
-4. Add gradient clipping: some models requires gradient clipping to stablize the training process. The default of `grad_clip` is `None`, you can add gradient clippint to avoid gradients that are too large, i.e., set `optimizer_config=dict(_delete_=True, grad_clip=dict(max_norm=35, norm_type=2))` in your config file. If your config does not inherits from any basic config that contains `optimizer_config=dict(grad_clip=None)`, you can simply add `optimizer_config=dict(grad_clip=dict(max_norm=35, norm_type=2))`.
+4. Add gradient clipping: some models requires gradient clipping to stabilize the training process. The default of `grad_clip` is `None`, you can add gradient clippint to avoid gradients that are too large, i.e., set `optimizer_config=dict(_delete_=True, grad_clip=dict(max_norm=35, norm_type=2))` in your config file. If your config does not inherits from any basic config that contains `optimizer_config=dict(grad_clip=None)`, you can simply add `optimizer_config=dict(grad_clip=dict(max_norm=35, norm_type=2))`.
 - ’GPU out of memory"
 1. There are some scenarios when there are large amount of ground truth boxes, which may cause OOM during target assignment. You can set `gpu_assign_thr=N` in the config of assigner thus the assigner will calculate box overlaps through CPU when there are more than N GT boxes.
 2. Set `with_cp=True` in the backbone. This uses the sublinear strategy in PyTorch to reduce GPU memory cost in the backbone.
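
Pulling the stability- and memory-related suggestions above into one place, a hedged config sketch might look like the following; the numbers and the exact nesting of the assigner are illustrative, not prescriptive.

```python
# Sketch combining the FAQ suggestions above in one config fragment.
# max_norm=35 / norm_type=2 follow the excerpt; gpu_assign_thr=100 is an
# arbitrary example threshold, and where the assigner lives (rpn vs. rcnn)
# depends on the detector you are training.
optimizer_config = dict(
    _delete_=True,                     # drop an inherited grad_clip=None
    grad_clip=dict(max_norm=35, norm_type=2))  # clip overly large gradients

model = dict(
    backbone=dict(with_cp=True),       # sublinear/checkpointing to save GPU memory
    train_cfg=dict(
        rpn=dict(
            assigner=dict(gpu_assign_thr=100))))  # assign on CPU when >100 GT boxes
```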

docs/get_started.md

Lines changed: 1 addition & 1 deletion
@@ -71,7 +71,7 @@ If mmcv and mmcv-full are both installed, there will be `ModuleNotFoundError`.
 conda install pytorch=1.3.1 cudatoolkit=9.2 torchvision=0.4.2 -c pytorch
 ```
 
-If you build PyTorch from source instead of installing the prebuilt pacakge,
+If you build PyTorch from source instead of installing the prebuilt package,
 you can use more CUDA versions such as 9.0.
 
 ### Install MMDetection

docs/robustness_benchmarking.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ pip install imagecorruptions
 ```
 
 Compared to imagenet-c a few changes had to be made to handle images of arbitrary size and greyscale images.
-We also modfied the 'motion blur' and 'snow' corruptions to remove dependency from a linux specific library,
+We also modified the 'motion blur' and 'snow' corruptions to remove dependency from a linux specific library,
 which would have to be installed separately otherwise. For details please refer to the [imagecorruptions repository](https://github.com/bethgelab/imagecorruptions).
 
 ## Inference with pretrained models
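
For readers who want to sanity-check the installed package, a small usage sketch of the `imagecorruptions` API (as documented in its repository, not part of the excerpt above) could look like this:

```python
import numpy as np
from imagecorruptions import corrupt, get_corruption_names

# Apply the 'motion_blur' corruption at severity 3 to a dummy RGB image.
# The corrupt() signature follows the imagecorruptions README; if the
# installed version differs, adjust the keyword names accordingly.
image = np.random.randint(0, 255, size=(224, 224, 3), dtype=np.uint8)
corrupted = corrupt(image, corruption_name='motion_blur', severity=3)

print(get_corruption_names())  # list the available corruption types
print(corrupted.shape)         # same spatial size as the input image
```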

docs/tutorials/config.md

Lines changed: 2 additions & 2 deletions
@@ -297,7 +297,7 @@ test_pipeline = [
         std=[58.395, 57.12, 57.375],
         to_rgb=True),
     dict(
-        type='Pad', # Padding config to pad images divisable by 32.
+        type='Pad', # Padding config to pad images divisible by 32.
         size_divisor=32),
     dict(
         type='ImageToTensor', # convert image to tensor

@@ -387,7 +387,7 @@ evaluation = dict( # The config to build the evaluation hook, refer to https://
     metric=['bbox', 'segm']) # Metrics used during evaluation
 optimizer = dict( # Config used to build optimizer, support all the optimizers in PyTorch whose arguments are also the same as those in PyTorch
     type='SGD', # Type of optimizers, refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/optimizer/default_constructor.py#L13 for more details
-    lr=0.02, # Learning rate of optimizers, see detail usages of the parameters in the documentaion of PyTorch
+    lr=0.02, # Learning rate of optimizers, see detail usages of the parameters in the documentation of PyTorch
     momentum=0.9, # Momentum
     weight_decay=0.0001) # Weight decay of SGD
 optimizer_config = dict( # Config used to build the optimizer hook, refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/optimizer.py#L8 for implementation details.

docs/tutorials/customize_dataset.md

Lines changed: 2 additions & 2 deletions
@@ -45,7 +45,7 @@ The annotation json files in COCO format has the following necessary keys:
 
 There are three necessary keys in the json file:
 
-- `images`: contains a list of images with their informations like `file_name`, `height`, `width`, and `id`.
+- `images`: contains a list of images with their information like `file_name`, `height`, `width`, and `id`.
 - `annotations`: contains the list of instance annotations.
 - `categories`: contains the list of categories names and their ID.
 

@@ -157,7 +157,7 @@ We use this way to support CityScapes dataset. The script is in [cityscapes.py](
 **Note**
 
 1. For instance segmentation datasets, **MMDetection only supports evaluating mask AP of dataset in COCO format for now**.
-2. It is recommanded to convert the data offline before training, thus you can still use `CocoDataset` and only need to modify the path of annotations and the training classes.
+2. It is recommended to convert the data offline before training, thus you can still use `CocoDataset` and only need to modify the path of annotations and the training classes.
 
 ### Reorganize new data format to middle format
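
To make the three keys above concrete, a toy COCO-style annotation structure is sketched below (written as a Python dict rather than raw JSON; all ids, sizes, and coordinates are invented for illustration).

```python
# Toy COCO-format annotation structure illustrating the three necessary keys.
# Every value below is made up for illustration only.
coco_style_annotations = {
    'images': [
        dict(file_name='000001.jpg', height=800, width=1333, id=1),
    ],
    'annotations': [
        dict(
            id=1,
            image_id=1,
            category_id=1,
            bbox=[100.0, 150.0, 200.0, 120.0],  # [x, y, width, height]
            area=24000.0,
            segmentation=[[100.0, 150.0, 300.0, 150.0,
                           300.0, 270.0, 100.0, 270.0]],
            iscrowd=0),
    ],
    'categories': [
        dict(id=1, name='person'),
    ],
}
```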

docs/tutorials/customize_runtime.md

Lines changed: 1 addition & 1 deletion
@@ -276,7 +276,7 @@ custom_hooks = [dict(type='NumClassCheckHook')]
 
 ### Modify default runtime hooks
 
-There are some common hooks that are not registerd through `custom_hooks`, they are
+There are some common hooks that are not registered through `custom_hooks`, they are
 
 - log_config
 - checkpoint_config
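
As context for the excerpt above, these default hooks are configured as plain top-level config fields rather than through `custom_hooks`. A sketch mirroring MMDetection's stock default-runtime values (shown for illustration, not taken from this commit):

```python
# Default runtime hooks live at the top level of the config.
# Values mirror MMDetection's stock default_runtime settings; adjust as needed.
checkpoint_config = dict(interval=1)  # save a checkpoint every epoch
log_config = dict(
    interval=50,  # log every 50 iterations
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook'),  # enable for TensorBoard logging
    ])
```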
