
Commit 4243ad2

Upload algorithm file
Signed-off-by: Frank-lilinjie <lilinjie@bupt.edu.cn>
1 parent dde86b6 commit 4243ad2


49 files changed: +5864 −0 lines changed
Lines changed: 103 additions & 0 deletions
@@ -0,0 +1,103 @@
# Quick Start

Welcome to Ianvs! Ianvs aims to test the performance of distributed synergy AI solutions following recognized standards,
in order to facilitate more efficient and effective development. This quick start helps you test your algorithm on Ianvs
with a simple example of curb detection. It reduces the manual procedure to just a few steps so that you can
build and start your distributed synergy AI solution development within minutes.

Before using Ianvs, you might want to have the device ready:
- One machine is all you need, i.e., a laptop or a virtual machine is sufficient and a cluster is not necessary
- 2 CPUs or more
- 4GB+ free memory, depending on the algorithm and simulation setting
- 10GB+ free disk space
- Internet connection for GitHub, pip, etc.
- Python 3.6+ installed

In this example, we are using the Linux platform with Python 3.6.9. If you are using Windows, most steps should still apply, but a few, such as commands and package requirements, might differ.
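The interpreter requirement can be checked up front; a minimal sketch (Ianvs itself does not ship such a check):

```python
import sys

# Ianvs requires Python 3.6+; fail fast with a clear message otherwise.
if sys.version_info < (3, 6):
    raise RuntimeError("Python 3.6+ is required, found " + sys.version.split()[0])
print("Python version OK:", sys.version.split()[0])
```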
## Step 1. Ianvs Preparation

First, we download the code of Ianvs. Assuming that we are using `/ianvs` as the workspace, Ianvs can be cloned with `Git`
as:

``` shell
mkdir /ianvs
cd /ianvs  # one may use another preferred path

mkdir project
cd project
git clone https://github.com/kubeedge/ianvs.git
```


Then, we install the third-party dependencies for Ianvs.
``` shell
sudo apt-get update
sudo apt-get install libgl1-mesa-glx -y
python -m pip install --upgrade pip

cd ianvs
python -m pip install ./examples/resources/third_party/*
python -m pip install -r requirements.txt
```

We are now ready to install Ianvs.
``` shell
python setup.py install
```
## Step 2. Dataset Preparation

Datasets and models can be large. To avoid an oversized project in the GitHub repository of Ianvs, the Ianvs code base does
not include the original datasets, so developers do not need to download unnecessary datasets for a quick start.

``` shell
cd /ianvs  # one may use another preferred path
mkdir dataset
cd dataset
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/ianvs/curb-detection/curb-detection.zip
unzip curb-detection.zip
```

The URL address of this dataset should then be filled in the configuration file ``testenv.yaml``. In this quick start,
we have done that for you, and interested readers can refer to [testenv.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.

<!-- Please put the downloaded dataset on the above dataset path, e.g., `/ianvs/dataset`. One can transfer the dataset to the path, e.g., on a remote Linux system using [XFTP]. -->
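For reference, the dataset entry of ``testenv.yaml`` takes roughly the following shape; the field names and index paths below are illustrative, not authoritative — see the linked guide for the exact schema.

``` yaml
testenv:
  dataset:
    # index files of the downloaded dataset; adjust to your actual paths
    train_url: "/ianvs/dataset/train_data/index.txt"
    test_url: "/ianvs/dataset/test_data/index.txt"
```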
The related algorithm is also ready in this quick start.
``` shell
export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/curb-detection/lifelong_learning_bench/testalgorithms/rfnet/RFNet
```

The URL address of this algorithm should then be filled in the configuration file ``algorithm.yaml``. In this quick
start, we have done that for you, and interested readers can refer to [algorithm.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.
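To confirm the export took effect, one can check that a child interpreter sees the entry on `sys.path`; a small sketch (the path is the one exported above and may differ on your machine):

```python
import os
import subprocess
import sys

# Path exported via PYTHONPATH above; adjust if you cloned elsewhere.
algo_path = "/ianvs/project/examples/curb-detection/lifelong_learning_bench/testalgorithms/rfnet/RFNet"

env = dict(os.environ)
env["PYTHONPATH"] = env.get("PYTHONPATH", "") + os.pathsep + algo_path

# PYTHONPATH entries are prepended to sys.path of a new interpreter,
# which is how the algorithm modules become importable at run time.
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=env, capture_output=True, text=True,
).stdout
print("on sys.path:", algo_path in out)
```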
## Step 3. Ianvs Execution and Presentation

We are now ready to run Ianvs for benchmarking.

``` shell
cd /ianvs/project
ianvs -f examples/curb-detection/lifelong_learning_bench/benchmarkingjob.yaml
```

Finally, the user can check the result of benchmarking on the console and also in the output path
(e.g., `/ianvs/lifelong_learning_bench/workspace`) defined in the benchmarking config file
(e.g., `benchmarkingjob.yaml`). In this quick start, we have done all configurations for you, and interested readers
can refer to [benchmarkingjob.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.
The final output might look like this:

|rank |algorithm |accuracy |samples_transfer_ratio|paradigm |basemodel |task_definition |task_allocation |basemodel-learning_rate |task_definition-origins|task_allocation-origins |
|:----:|:-----------------------:|:--------:|:--------------------:|:------------------:|:---------:|:--------------------:|:---------------------:|:-----------------------:|:----------------------|:-----------------------|
|1 |rfnet_lifelong_learning | 0.2123 |0.4649 |lifelonglearning | BaseModel |TaskDefinitionByOrigin| TaskAllocationByOrigin|0.0001 |['real', 'sim'] |['real', 'sim'] |
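The rank column follows the `sort_by` setting in `benchmarkingjob.yaml`: accuracy descending, then samples_transfer_ratio ascending. A minimal Python sketch of that ordering (the row values below are made up for illustration):

```python
# Leaderboard rows; only the two ranking metrics matter for ordering.
rows = [
    {"algorithm": "algo_a", "accuracy": 0.2123, "samples_transfer_ratio": 0.4649},
    {"algorithm": "algo_b", "accuracy": 0.2123, "samples_transfer_ratio": 0.3100},
    {"algorithm": "algo_c", "accuracy": 0.4000, "samples_transfer_ratio": 0.5000},
]

# Mirrors: sort_by: [ { "accuracy": "descend" }, { "samples_transfer_ratio": "ascend" } ]
ranked = sorted(rows, key=lambda r: (-r["accuracy"], r["samples_transfer_ratio"]))
for rank, row in enumerate(ranked, start=1):
    print(rank, row["algorithm"])
# → 1 algo_c, 2 algo_b, 3 algo_a
```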
This ends the quick start experiment.

# What is next

If any problem occurs, the user can refer to [the issue page on GitHub](https://github.com/kubeedge/ianvs/issues) for help, and is also welcome to raise any new issue.

Enjoy your journey on Ianvs!
Lines changed: 72 additions & 0 deletions
@@ -0,0 +1,72 @@
benchmarkingjob:
  # job name of benchmarking; string type;
  name: "benchmarkingjob"
  # the url address of the job workspace that will reserve the output of tests; string type;
  workspace: "/ianvs/lifelong_learning_bench/workspace"

  # the url address of the test environment configuration file; string type;
  # the file format supports yaml/yml;
  testenv: "./examples/curb-detection/lifelong_learning_bench/testenv/testenv.yaml"

  # the configuration of the test object
  test_object:
    # test type; string type;
    # currently the only option is "algorithms"; others will be added in succession.
    type: "algorithms"
    # test algorithm configuration files; list type;
    algorithms:
      # algorithm name; string type;
      - name: "rfnet_lifelong_learning"
        # the url address of the test algorithm configuration file; string type;
        # the file format supports yaml/yml
        url: "./examples/curb-detection/lifelong_learning_bench/testalgorithms/rfnet/rfnet_algorithm.yaml"

  # the configuration of the ranking leaderboard
  rank:
    # rank the leaderboard by the metrics of the test cases' evaluation and order; list type;
    # the sorting priority is based on the sequence of metrics in the list, from front to back;
    sort_by: [ { "accuracy": "descend" }, { "samples_transfer_ratio": "ascend" } ]

    # visualization configuration
    visualization:
      # mode of visualization in the leaderboard; string type;
      # There are quite a few possible dataitems in the leaderboard. Not all of them can be shown simultaneously on the screen.
      # In the leaderboard, we provide the "selected_only" mode for the user to configure what is shown and what is not.
      mode: "selected_only"
      # method of visualization for selected dataitems; string type;
      # currently the options are as follows:
      # 1> "print_table": print selected dataitems;
      method: "print_table"

    # selected dataitem configuration
    # The user can add his/her interested dataitems in terms of "paradigms", "modules", "hyperparameters" and "metrics",
    # so that the selected columns will be shown.
    selected_dataitem:
      # currently the options are as follows:
      # 1> "all": select all paradigms in the leaderboard;
      # 2> paradigms in the leaderboard, e.g., "singletasklearning"
      paradigms: [ "all" ]
      # currently the options are as follows:
      # 1> "all": select all modules in the leaderboard;
      # 2> modules in the leaderboard, e.g., "basemodel"
      modules: [ "all" ]
      # currently the options are as follows:
      # 1> "all": select all hyperparameters in the leaderboard;
      # 2> hyperparameters in the leaderboard, e.g., "momentum"
      hyperparameters: [ "all" ]
      # currently the options are as follows:
      # 1> "all": select all metrics in the leaderboard;
      # 2> metrics in the leaderboard, e.g., "F1_SCORE"
      metrics: [ "accuracy", "samples_transfer_ratio" ]

    # mode of saving selected and all dataitems in the workspace `./rank`; string type;
    # currently the options are as follows:
    # 1> "selected_and_all": save selected and all dataitems;
    # 2> "selected_only": save selected dataitems;
    save_mode: "selected_and_all"
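As a rough illustration of the `selected_only` idea (the real filtering lives inside Ianvs and may differ), only the configured dataitems survive into the displayed columns:

```python
# Metric columns configured under selected_dataitem above.
selected_metrics = ["accuracy", "samples_transfer_ratio"]

# A full test-case record; extra keys stand in for other dataitems.
record = {
    "accuracy": 0.2123,
    "samples_transfer_ratio": 0.4649,
    "paradigm": "lifelonglearning",
    "basemodel-learning_rate": 0.0001,
}

# Keep only the selected metric columns for display.
shown = {k: record[k] for k in selected_metrics}
print(shown)
```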
Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
from basemodel import val_args
from utils.metrics import Evaluator
from tqdm import tqdm
from dataloaders import make_data_loader
from sedna.common.class_factory import ClassType, ClassFactory

__all__ = ('accuracy',)


@ClassFactory.register(ClassType.GENERAL)
def accuracy(y_true, y_pred, **kwargs):
    args = val_args()
    _, _, test_loader, num_class = make_data_loader(args, test_data=y_true)
    evaluator = Evaluator(num_class)

    tbar = tqdm(test_loader, desc='\r')
    for i, (sample, img_path) in enumerate(tbar):
        if args.depth:
            image, depth, target = sample['image'], sample['depth'], sample['label']
        else:
            image, target = sample['image'], sample['label']
        if args.cuda:
            image, target = image.cuda(), target.cuda()
            if args.depth:
                depth = depth.cuda()

        # Map out-of-range class ids to the ignore label before evaluation
        target[target > evaluator.num_class - 1] = 255
        target = target.cpu().numpy()
        # Add batch sample into evaluator
        evaluator.add_batch(target, y_pred[i])

    # Test during the training
    # Acc = evaluator.Pixel_Accuracy()
    CPA = evaluator.Pixel_Accuracy_Class()
    mIoU = evaluator.Mean_Intersection_over_Union()
    FWIoU = evaluator.Frequency_Weighted_Intersection_over_Union()

    print("CPA:{}, mIoU:{}, fwIoU: {}".format(CPA, mIoU, FWIoU))
    return CPA
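`Evaluator` above comes from RFNet's `utils.metrics` and accumulates a confusion matrix across batches. A self-contained sketch of how class pixel accuracy (CPA) falls out of such a matrix — simplified, and not the project's actual implementation:

```python
import numpy as np

class MiniEvaluator:
    """Simplified stand-in for utils.metrics.Evaluator: accumulates a
    confusion matrix and derives per-class pixel accuracy from it."""

    def __init__(self, num_class):
        self.num_class = num_class
        self.confusion_matrix = np.zeros((num_class, num_class), dtype=np.int64)

    def add_batch(self, gt, pred):
        # Count (gt, pred) label pairs; labels outside [0, num_class) are ignored.
        mask = (gt >= 0) & (gt < self.num_class)
        idx = self.num_class * gt[mask].astype(int) + pred[mask].astype(int)
        counts = np.bincount(idx, minlength=self.num_class ** 2)
        self.confusion_matrix += counts.reshape(self.num_class, self.num_class)

    def pixel_accuracy_class(self):
        # Mean over classes of (correct pixels of class c) / (pixels labelled c)
        per_class = np.diag(self.confusion_matrix) / self.confusion_matrix.sum(axis=1)
        return np.nanmean(per_class)

ev = MiniEvaluator(num_class=2)
gt = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
ev.add_batch(gt, pred)
print(ev.pixel_accuracy_class())  # 0.75: class 0 → 1/2, class 1 → 2/2
```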
