Hello there! 😀
This project was born from the idea of applying such algorithms to camera traps located on my family’s property in a remote area of Italy, where wildlife is frequently observed. The goal was to develop a simple application for fixed camera-trap systems, in which a basic motion-detection algorithm automatically identifies subjects to be analyzed by a fine-tuned image-classification model.
Example of results applied to camera traps located on my family’s property 👀 (not used for training but only for inference purposes):
Using Grad-CAM, we gain some model explainability: the model correctly classifies the fox, and the region of the image the model focused on to reach this conclusion is highlighted:
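To make the idea concrete, here is a minimal, framework-free sketch of the core Grad-CAM computation on synthetic NumPy tensors. In real usage, `feature_maps` and `gradients` would come from the fine-tuned model's last convolutional layer; here they are random arrays purely for illustration.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Core Grad-CAM computation.

    feature_maps: (H, W, K) activations of the last conv layer.
    gradients:    (H, W, K) gradients of the target class score
                  w.r.t. those activations.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(0, 1))                      # shape (K,)
    # Weighted sum of feature maps over channels, then ReLU.
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0.0)
    # Normalize for visualization.
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Synthetic demo with random tensors.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((7, 7, 32))
grads = rng.standard_normal((7, 7, 32))
heatmap = grad_cam(fmap, grads)
print(heatmap.shape)  # (7, 7)
```

The resulting heatmap is typically upsampled to the input resolution and overlaid on the original image, which is what produces the highlighted regions shown above.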
N.B. This is a work in progress, built with limited resources. Community support is always welcome! 💪
We suggest using PyCharm Community for steps 2-8 below.
- Install Python 3.9: Make sure you have Python installed on your system. You can download it from the official Python website (https://www.python.org/) and follow the installation instructions for your operating system;
- Clone the repository;
- Create a virtual environment;
- Activate the virtual environment;
- Mark the `camera_traps` folder as the root directory;
- Install project dependencies:
```bash
pip install -r requirements.txt
```
- Run the following commands to install further project dependencies:

```bash
poetry lock --no-update
poetry install
```
- Run main project script:
```bash
python camera_traps/main.py
```
Now you're all set! 🎉 Happy coding! 😄✨
- Install Docker: Visit the official Docker website (https://www.docker.com/) and follow the installation instructions for your operating system;
- Clone the repository;
- Build the Docker image: navigate to the project's root directory and run the following command to build the Docker image:
```bash
docker build -t project_name .
```
- Run the Docker container: Once the image is built, start a container with the following command:
```bash
docker run -it project_name
```
🚀 This will launch the project within the Docker container! 🐳
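For reference, a minimal Dockerfile for the steps above might look like the sketch below. The base image and layer ordering are assumptions, not necessarily what the repository ships; only `requirements.txt` and `camera_traps/main.py` are taken from the setup instructions.

```dockerfile
# Minimal sketch; the project's actual Dockerfile may differ.
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first to leverage Docker layer caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the project source.
COPY . .

# Entry point mirrors the manual setup above.
CMD ["python", "camera_traps/main.py"]
```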
Currently, the model being used is EfficientNetB0 (https://keras.io/api/applications/), fine-tuned on the custom dataset. This model was chosen for its high accuracy combined with a relatively low number of parameters; larger, more recent variants of the same family would come at a noticeable cost in computational performance.
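A typical Keras setup for this kind of fine-tuning is sketched below. This is not the project's actual training script: the input size, head layers, optimizer, and hyperparameters are assumptions, and `weights=None` is used only to keep the sketch offline (fine-tuning would normally start from `weights="imagenet"`). The class count of 17 matches the labels in the dataset table below.

```python
import tensorflow as tf

NUM_CLASSES = 17  # one per label in the dataset table below

# EfficientNetB0 backbone without the ImageNet classification head.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights=None, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the backbone for the first training phase

# Small classification head on top of the frozen backbone.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
print(model.output_shape)  # (None, 17)
```

After the head converges, a common second phase unfreezes the top backbone layers and continues training with a lower learning rate.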
The training sessions were conducted on an NVIDIA GeForce 940MX GPU.
Some of the best weights obtained after fine-tuning are available at the Google Drive link.
The dataset used for training is available at the Google Drive link (~ 2.5 GB).
The current dataset has been obtained by combining multiple sources of data available online in order to assemble a dataset of images captured by camera traps in both daytime and nighttime settings. The currently available image classes are as follows:
| label | setting | count |
|---|---|---|
| None_of_the_above | day | 3000 |
| None_of_the_above | night | 400 |
| badger | day | 955 |
| badger | night | 1474 |
| badger | unspecified | 18 |
| bear | day | 985 |
| bear | night | 420 |
| bear | unspecified | 779 |
| bird | unspecified | 2777 |
| boar | day | 1287 |
| boar | night | 675 |
| boar | unspecified | 775 |
| cat | day | 1045 |
| cat | night | 935 |
| cat | unspecified | 4759 |
| chicken | unspecified | 680 |
| cow | day | 1351 |
| cow | night | 103 |
| cow | unspecified | 1138 |
| deer | day | 3805 |
| deer | night | 2286 |
| deer | unspecified | 561 |
| dog | day | 1360 |
| dog | night | 124 |
| dog | unspecified | 3291 |
| fox | day | 1408 |
| fox | night | 1320 |
| fox | unspecified | 8 |
| hare | day | 20 |
| hare | night | 1262 |
| hare | unspecified | 5110 |
| horse | unspecified | 62 |
| human | unspecified | 2980 |
| squirrel | unspecified | 2775 |
| vehicle | unspecified | 2829 |
| weasel | day | 1907 |
| weasel | night | 1119 |
The dataset folder structure is then organized as follows:
```
root
└── dataset
    ├── label1
    │   ├── image1.jpg
    │   ├── image2.png
    │   └── ...
    └── label2
        └── ...
```
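Given this layout, per-label image counts can be gathered with a small `pathlib` helper. The snippet below is a self-contained sketch that builds a throwaway directory mimicking the structure above; label names and counts in the demo are arbitrary.

```python
import tempfile
from pathlib import Path

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def count_images_per_label(dataset_dir: Path) -> dict:
    """Count image files in each label subfolder of the dataset."""
    return {
        label_dir.name: sum(
            1 for f in label_dir.iterdir()
            if f.suffix.lower() in IMAGE_EXTENSIONS
        )
        for label_dir in sorted(dataset_dir.iterdir())
        if label_dir.is_dir()
    }

# Demo on a temporary directory mimicking the layout above.
with tempfile.TemporaryDirectory() as tmp:
    dataset = Path(tmp) / "dataset"
    for label, n in [("fox", 3), ("badger", 2)]:
        (dataset / label).mkdir(parents=True)
        for i in range(n):
            (dataset / label / f"img{i}.jpg").touch()
    counts = count_images_per_label(dataset)
    print(counts)  # {'badger': 2, 'fox': 3}
```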
The filename of each image is defined as follows:
```
{referenceNameDataset}_{nameLabel}_{timeCondition}_{progressiveIndex}.jpg
```
N.B. Underscores are used only as field separators; within fields, camelCase notation is used.
The `timeCondition` field can be `day`, `night`, or `unspecified`.
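The convention above can be parsed with a small helper like the sketch below. The example filename (with the `ntlnp` dataset prefix) is hypothetical; the parser reads the label from the middle of the name so that labels which themselves contain underscores, such as `None_of_the_above`, are still handled.

```python
def parse_image_filename(filename: str) -> dict:
    """Parse a dataset image filename into its four fields.

    Expected pattern:
    {referenceNameDataset}_{nameLabel}_{timeCondition}_{progressiveIndex}.jpg
    """
    stem = filename.rsplit(".", 1)[0]
    parts = stem.split("_")
    if len(parts) < 4:
        raise ValueError(f"Unexpected filename format: {filename}")
    # The label is everything between the first field and the last two,
    # so labels containing underscores (e.g. "None_of_the_above") survive.
    return {
        "referenceNameDataset": parts[0],
        "nameLabel": "_".join(parts[1:-2]),
        "timeCondition": parts[-2],
        "progressiveIndex": int(parts[-1]),
    }

# Hypothetical example following the convention above.
info = parse_image_filename("ntlnp_fox_night_0042.jpg")
print(info["nameLabel"], info["timeCondition"])  # fox night
```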
Useful dataset links:
- NTLNP (wildlife image dataset): https://paperswithcode.com/dataset/ntlnp-wildlife-image-dataset
- CCT20 (subset): https://lila.science/datasets/caltech-camera-traps
- Sheffield: https://figshare.shef.ac.uk/articles/dataset/Badger_datasets_for_image_recognition/8182370/1
- LilaMissouri: https://lila.science/datasets/missouricameratraps
- PennFudan: https://www.cis.upenn.edu/~jshi/ped_html/
This project is released under the MIT license.
Please open an issue or contact pietro.foini1@gmail.com with any questions.