This repository is a collection of all the software required to offload functions for the wildfire use case (use-case 2) and to analyze the data generated by a sample TT Fire device.
The repository contains the following folders:
- `example`: Contains the software and data used for the demos. This directory also includes a `requirements.txt` file needed to run the demos. See Demos.
- `FireUC`: This folder must be included in the COGNIT VM image in order for it to run. It also contains a `requirements.txt` file for the COGNIT VM environment, as well as a pre-trained neural network model in TensorFlow format that is imported by the function. See Setting up the image.
- `dataset`: Contains a sample dataset along with Python scripts to parse the data. See Datasets.
The pre-trained neural network model used for this test is available at the following link. Minor modifications have been made to the original software to adapt it to our requirements.
The virtual machine requirements to run the image recognition function are:
- At least 3.072 GB (3072 MB) of RAM
- 8 GB of disk space
- A host CPU that supports SSE4.1 instructions
- A VM template with the CPU mode set to `host-passthrough`
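On a Linux host, a quick way to check the SSE4.1 requirement is to look for the `sse4_1` flag in `/proc/cpuinfo`; the small script below is only a convenience sketch, not part of the repository:

```python
# Convenience check (Linux only): verify that the host CPU advertises the
# SSE4.1 flag listed in the requirements above.
def host_supports_sse41(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "sse4_1" in line.split()
    return False

if __name__ == "__main__":
    print("SSE4.1 supported:", host_supports_sse41())
```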
To set up the image, upload the contents of the `FireUC` folder to the VM under `/root/userapp`, and rename the model folder to `InceptionV1-OnFire`.
Then, activate the virtual environment (see serverless-runtime):
```
source /root/serverless-runtime/serverless-env/bin/activate
```

The libGL library (required to install `opencv-contrib-python`) can be installed on openSUSE distributions with the following commands:

```
zypper install Mesa-libGL1
zypper install libgthread-2_0-0
```

Finally, install the requirements listed in `/root/userapp/requirements.txt`:

```
pip install -r /root/userapp/requirements.txt
```

To run both demos, a file named `cognit.yml` is required. This file must define two parameters used by the device client framework:
- `api_endpoint`: `https://cognit-lab-frontend.sovereignedge.eu`
- `credentials`: A colon-separated username and password used by the device client to access the COGNIT frontend.
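A minimal `cognit.yml` could therefore look as follows; the credentials value is a placeholder for the account provisioned for your device client:

```yaml
api_endpoint: https://cognit-lab-frontend.sovereignedge.eu
credentials: "myuser:mypassword"  # placeholder <username>:<password>
```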
See device-runtime-py for more information.
The geographical model (`uc2_fire_map_async.py`) represents a geographical simulation of a fire. When the program starts, a dashboard loads in the browser, allowing the user to select the map where the fire will take place as well as the wind direction and speed. As the parameters are adjusted, devices are placed randomly in the selected area, and an additional random point is chosen as the starting location of the fire. Once the simulation starts, two ellipses appear: an inner red one showing the current fire spread, and an outer yellow one showing how far the distress signal reaches (used to activate nearby devices). During each time step, both ellipses grow according to the wind direction and speed, with higher wind speeds making the fire spread faster. After the ellipses grow, every device inside the inner ellipse offloads a picture of a forest fire to COGNIT, while the devices in the outer ellipse report a picture of a forest. All function offloads are performed asynchronously.
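Deciding whether a device lies inside one of the ellipses is essentially a rotated point-in-ellipse test; the function below is only an illustrative sketch of that geometry, not the demo's actual implementation, which works on the coordinates of the selected map:

```python
import math

def point_in_ellipse(px, py, cx, cy, semi_major, semi_minor, angle_deg):
    """Illustrative check: is point (px, py) inside an ellipse centred at
    (cx, cy) whose major axis is rotated by angle_deg (e.g. the wind direction)?"""
    theta = math.radians(angle_deg)
    dx, dy = px - cx, py - cy
    # Rotate the point into the ellipse's local frame.
    x = dx * math.cos(theta) + dy * math.sin(theta)
    y = -dx * math.sin(theta) + dy * math.cos(theta)
    return (x / semi_major) ** 2 + (y / semi_minor) ** 2 <= 1.0
```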
To run this demo, an additional configuration file called `settings.toml` is required. This file has three sections: `general`, `map`, and `common-paths`. The `general` section contains four options:
- `number_of_devices`: Number of devices on the map (mutually exclusive with `device_spacing`)
- `device_spacing`: Distance between devices when placed on a grid (mutually exclusive with `number_of_devices`)
- `device_trigger_radius`: Trigger radius of each device, in meters
- `simulation_steps`: Number of simulation steps
The `map` and `common-paths` sections contain settings related to file locations on disk and parameters that can be modified via the dashboard.
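A sketch of such a file is shown below; the section and key names under `[general]` are the ones listed above, while the values and the `[map]`/`[common-paths]` entries are placeholders that depend on the local setup:

```toml
[general]
number_of_devices = 50        # or set device_spacing instead; the two are mutually exclusive
device_trigger_radius = 500   # meters
simulation_steps = 100

[map]
# map-related settings and parameters adjustable via the dashboard (site-specific)

[common-paths]
# file locations on disk (site-specific)
```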
The statistical model (uc2_fire_stat_async.py) simulates fire spread in a probabilistic manner. A fire starts at the center of a grid and has a chance to spread to nearby tiles. Over time, the probability of spreading increases.
Each tile may contain multiple devices. As in the geographical model, devices located in tiles where the fire has spread offload a wildfire image as the function argument. All function offloads to COGNIT are performed asynchronously.
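The snippet below sketches this kind of probabilistic spread on a grid; the probabilities and grid size are illustrative assumptions, not the values used by the demo:

```python
import random

def simulate_spread(grid_size=21, steps=10, base_prob=0.1, prob_growth=0.05):
    """Toy model: the fire starts at the centre tile and, at every step, may
    spread to the four neighbouring tiles with a probability that grows over time."""
    centre = grid_size // 2
    on_fire = {(centre, centre)}
    for step in range(steps):
        prob = min(1.0, base_prob + prob_growth * step)
        newly_ignited = set()
        for x, y in on_fire:
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                in_grid = 0 <= nx < grid_size and 0 <= ny < grid_size
                if in_grid and (nx, ny) not in on_fire and random.random() < prob:
                    newly_ignited.add((nx, ny))
        on_fire |= newly_ignited
    return on_fire  # burning tiles; devices on these tiles would offload a wildfire image
```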
A sample dataset similar to the data produced by TT Fire devices is available in the dataset directory.
The main difference between the provided sample data and the data produced by the TT Fire is that the sample dataset contains one RGB picture per hour, instead of capturing an image only when the sensors detect a fire. This approach allows testing of RGB image processing without requiring an actual fire.
The folder contains the following files:
- 3 Python scripts to process the data:
  - `extract_IR_images.py`: Extracts the images produced by the infrared camera
  - `extract_JPG_images.py`: Extracts the images produced by the RGB camera
  - `extract_sensor_data.py`: Extracts the sensor data to a CSV file
- 4 data files, one for each sample device
- A `requirements.txt` file with the modules used by the Python scripts
The parameters collected by TT Fire devices and included in the dataset are:
- Carbon dioxide concentration in air (ppm)
- Ozone concentration in air (dimensionless number)
- Particulate matter: PM10, PM2.5, PM1 (µg/m³)
- Air temperature (°C)
- Air relative humidity (%)
- Foliage temperature (°C)
- IR camera data with a 32x24 resolution
- RGB camera for image acquisition
Each data file generally contains one row per received data record. Each row is a list of semicolon-separated values, with the first fields always being (in order):
- Timestamp
- Device serial number
- Record type
Depending on the record type, additional fields follow.
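The helper below sketches how such a row can be split; the dictionary keys are illustrative, and the extraction scripts in the `dataset` folder remain the reference for the exact field layout:

```python
def read_records(path):
    """Parse a sample data file into a list of dicts (illustrative field names)."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split(";")
            if len(fields) < 3:
                continue  # skip empty or malformed lines
            records.append({
                "timestamp": fields[0],
                "device_serial": fields[1],
                "record_type": fields[2],
                "payload": fields[3:],  # record-type-specific values
            })
    return records
```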