
Commit b515740 (parent 6dab1bf)

lucasalvaa committed: Add README.md

Co-authored-by: SimoCimmi <simonecimmino2004@gmail.com>
Co-authored-by: Morgan Vitiello <morgan.vitiello06@gmail.com>

File tree

8 files changed, +62 −0 lines changed

.github/images/baseline.png (55.1 KB)
.github/images/classes.png (168 KB)
.github/images/pipeline1.png (66.1 KB)
.github/images/pipeline2.png (78 KB)
.github/images/pipeline3.png (30.3 KB)
.github/images/results.png (245 KB)
.github/images/safepet.png (189 KB)

README.md

# SkinDetector

SkinDetector is a deep-learning module capable of diagnosing canine dermatological diseases from an image of your dog's skin.

SkinDetector is part of the [SafePet project](https://github.com/Progetto-SafePet).

![SafePet Logo](./.github/images/safepet.png)
## Authors

Simone Cimmino [@SimoCimmi](https://github.com/SimoCimmi) <br>
Luca Salvatore [@lucasalvaa](https://github.com/lucasalvaa) <br>
Morgan Vitiello [@MorganVitiello](https://github.com/MorganVitiello)
## Credits

The project was developed at the University of Salerno, Department of Computer Science, in the academic year 2025-26 for the Fundamentals of Artificial Intelligence exam held by Professor Fabio Palomba [@fpalomba](https://github.com/fpalomba), whom we thank for his support.
## Dataset

The dataset on which the models were trained is [Dog's skin diseases (Image Dataset)](https://www.kaggle.com/datasets/youssefmohmmed/dogs-skin-diseases-image-dataset).

![class distribution](./.github/images/classes.png)
## Pipelines

To select the final model, we ran four slightly different training pipelines. Each pipeline is defined in a dvc.yaml file in the corresponding experiments/[pipeline] directory.
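As an illustration, a DVC pipeline stage file of this kind might look roughly as follows (the stage names, script paths, and outputs here are hypothetical, not the repository's actual configuration):

```yaml
# Hypothetical sketch of one pipeline's dvc.yaml.
stages:
  preprocess:
    cmd: python src/preprocess.py
    deps:
      - data/raw
    outs:
      - data/processed
  train:
    cmd: python src/train.py
    deps:
      - data/processed
    outs:
      - models/model.pt
    metrics:
      - metrics.json
```

Running `dvc repro` in a pipeline's directory executes the stages in dependency order and skips any stage whose inputs are unchanged.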
### Baseline

Training is performed on the original dataset after removing duplicate images.

![baseline](./.github/images/baseline.png)
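The duplicate-removal step can be sketched by hashing each image's bytes and keeping only the first occurrence of each digest (a minimal illustration; the repository's actual dedup step may differ):

```python
import hashlib

def deduplicate(images):
    """Keep one copy of each identical image, preserving order.
    `images` is a list of raw image byte strings (hypothetical helper)."""
    seen = set()
    unique = []
    for img_bytes in images:
        digest = hashlib.sha256(img_bytes).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(img_bytes)
    return unique

dataset = [b"img-a", b"img-b", b"img-a", b"img-c", b"img-b"]
print(len(deduplicate(dataset)))  # 3 unique images remain
```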
### Data Augmentation (Pipeline 1)

Model training is performed after creating synthetic samples to enlarge the dataset. We selected four techniques to increase the image volume of the training set: horizontal flipping, vertical flipping, noise injection, and saturation enhancement. For each image in the initial training set, only two of the four techniques are applied, chosen at random. After the data augmentation phase, the training set is therefore three times its original size.

![pipeline1](./.github/images/pipeline1.png)
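The scheme above (each image plus two randomly chosen augmentations, tripling the set) can be sketched with NumPy on images represented as float arrays in [0, 1]; this is an illustration, not the repository's actual implementation, and the noise/saturation parameters are assumptions:

```python
import random
import numpy as np

def hflip(img):                          # horizontal flipping
    return img[:, ::-1]

def vflip(img):                          # vertical flipping
    return img[::-1]

def add_noise(img, sigma=0.05):          # noise injection
    return np.clip(img + np.random.normal(0, sigma, img.shape), 0.0, 1.0)

def boost_saturation(img, factor=1.5):   # saturation enhancement
    gray = img.mean(axis=2, keepdims=True)
    return np.clip(gray + factor * (img - gray), 0.0, 1.0)

AUGMENTATIONS = [hflip, vflip, add_noise, boost_saturation]

def augment_dataset(images):
    """Each image yields itself plus two randomly chosen augmentations,
    so the training set triples in size."""
    out = []
    for img in images:
        out.append(img)
        for aug in random.sample(AUGMENTATIONS, 2):  # 2 of 4, no repeats
            out.append(aug(img))
    return out

train = [np.random.rand(64, 64, 3) for _ in range(10)]
print(len(augment_dataset(train)))  # 30: three times the original 10 images
```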
### Two-Stage Fine-Tuning (Pipeline 2)

This approach, formalized by [ValizadehAslani et al.](https://arxiv.org/abs/2207.10858), is designed to address the class-imbalance problem of the training set. Unlike standard fine-tuning, which often causes the model to ignore minority classes in favor of majority classes, this technique divides the learning process into two distinct phases. In the first phase, only the last layer of the model is trained, while all other backbone layers are frozen. The weighted loss function [LDAM Loss](https://arxiv.org/abs/1906.07413) is used, which assigns larger margins to minority classes, forcing the classifier to minimize the error on them despite the scarcity of samples. In the second phase, fine-tuning is performed by unfreezing all model layers and using the traditional cross-entropy loss function.

![pipeline2](./.github/images/pipeline2.png)
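The margin idea behind LDAM can be sketched in NumPy: per-class margins scale as n_j^(-1/4) (so rarer classes get larger margins), and the target class's logit is reduced by its margin before cross-entropy. The class counts below are illustrative, not the dataset's real distribution:

```python
import numpy as np

def ldam_margins(class_counts, max_margin=0.5):
    """LDAM margins: m_j proportional to n_j^(-1/4), rescaled so the
    rarest class receives max_margin."""
    m = 1.0 / np.power(np.asarray(class_counts, dtype=float), 0.25)
    return m * (max_margin / m.max())

def ldam_loss(logits, target, margins):
    """Cross-entropy computed after subtracting each sample's target-class
    margin from its logit, enlarging the required decision margin."""
    z = logits.astype(float).copy()
    z[np.arange(len(target)), target] -= margins[target]
    z -= z.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(target)), target].mean()

counts = [500, 120, 40]         # illustrative per-class sample counts
margins = ldam_margins(counts)  # largest margin goes to the rarest class
logits = np.zeros((3, 3))
target = np.array([0, 1, 2])
print(margins)                               # increases with class rarity
print(ldam_loss(logits, target, margins))    # > log(3), the plain-CE value here
```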
### Data Augmentation + Two-Stage Fine-Tuning (Pipeline 3)

This pipeline combines the data augmentation strategy of Pipeline 1 with the two-stage fine-tuning strategy of Pipeline 2.

![pipeline3](./.github/images/pipeline3.png)
## Results

The table below shows the results of the experiments. The final model chosen is EfficientNetV2_S trained with Pipeline 3.

![results](./.github/images/results.png)
