
Commit 8bd6617

fix: fixed descriptions and mime types for services
1 parent 6b87bf2 commit 8bd6617

File tree

4 files changed, +6 -6 lines changed

algorithm_catalog/vito/biopar/records/biopar.json

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@
     "created": "2024-10-03T00:00:00Z",
     "updated": "2024-10-03T00:00:00Z",
     "type": "apex_algorithm",
-    "title": "Calculate various biophysical parameters",
+    "title": "Biophysical parameters calculation",
     "description": "The algorithm offers a solution to calculate vegetation-related parameters like leaf area index, fraction of absorbed photosynthetically active radiation, and more.",
     "cost_estimate": 4,
     "cost_unit": "platform credits per km²",
@@ -115,7 +115,7 @@
     },
     {
       "rel": "example-output",
-      "type": "application/json",
+      "type": "image/tif",
       "title": "Example output",
       "href": "https://s3.waw3-1.cloudferro.com/swift/v1/apex-examples/biopar/biopar-example.tif"
     },

algorithm_catalog/vito/eurac_pv_farm_detection/records/eurac_pv_farm_detection.json

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@
     "updated": "2024-10-16T00:00:00Z",
     "type": "apex_algorithm",
     "title": "EURAC Photovoltaic Farms mapping",
-    "description": "Photovoltaic farms (PV farms) mapping is essential for establishing valid policies regarding natural resources management and clean energy.\n As evidenced by the recent COP28 summit, where almost 120 global leaders pledged to triple the worlds renewable energy capacity before 2030, it is crucial to make these mapping efforts scalable and reproducible.\n Recently, there were efforts towards the global mapping of PV farms, but these were limited to fixed time periods of the analyzed satellite imagery and not openly reproducible.\n To resolve this limitation we implemented the detection workflow for mapping solar farms using Sentinel-2 imagery in an openEO process.\n Open-source data is used to construct the training dataset, leveraging OpenStreetMap (OSM) to gather PV farms polygons across different countries.\n Different filtering techniques are involved in the creation of the training set, in particular land cover and terrain.\n To ensure model robustness, we leveraged the temporal resolution of Sentinel-2 L2A data and utilized openEO to create a reusable workflow that simplifies the data access in the cloud, allowing the collection of training samples over Europe efficiently.\n This workflow includes preprocessing steps such as cloud masking, gap filling, outliers filtering as well as feature extraction.\n Alot of effort is put in the best training samples generation, ensuring an optimal starting point for the subsequent steps.\n After compiling the training dataset, we conducted a statistical discrimination analysis of different pixel-level models to determine the most effective one.\n Our goal is to compare time-series machine learning (ML) models like InceptionTime, which uses 3D data as input, with tree-based models like Random Forest (RF), which employs 2D data along with feature engineering.\n An openEO process graph was constructed for the execution of the inference phase, encapsulating all necessary processes from the preprocessing to the prediction stage.\n The UDP process for the PV farms mapping is integrated with the ESA Green Transition Information Factory (GTIF, https://gtif.esa.int/), providing the ability for streamlined and FAIR compliant updates of related energy infrastructure mapping efforts.\n How to cite: Alasawedah, M., Claus, M., Jacob, A., Griffiths, P., Dries, J., and Lippens, S.: Photovoltaic Farms Mapping using openEO Platform, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16841, https://doi.org/10.5194/egusphere-egu24-16841, 2024.\n For more information please visit: https://github.com/clausmichele/openEO_photovoltaic/tree/main",
+    "description": "An openEO process developed by EURAC to detect photovoltaic farms, based on sentinel2 data.",
     "cost_estimate": 0.001,
     "cost_unit": "platform credits per km\u00b2",
     "keywords": [

algorithm_catalog/vito/max_ndvi_composite/openeo_udp/max_ndvi_composite.json

Lines changed: 1 addition & 1 deletion
@@ -379,7 +379,7 @@
     },
     "id": "max_ndvi_composite",
     "summary": "Max NDVI composite at 10m resolution.",
-    "description": "# Description\n\nThis algorithm generates a Sentinel-2 based composite for a selected area and temporal extent. By default, the resolution of the output is 10 meters.\n\nThe used compositing method is the \"max-NDVI\" method, which selects the pixel with the highest NDVI value for each pixel location and within the time window.\nThe method falls under the 'rank composite' category, and ensures that selected spectral band values for any individual pixel all come from the same observation.\n\nThe method generates good results for timeseries analytics, but spatially neighbouring pixels may be selected from different observations,\nwhich leads to visual discontinuities in the result.\n\n# Performance characteristics\n\nThe method is computationally efficient, as it only requires the B04, B08 and SCL bands to determine the rank score. Loading\nof other bands can be minimized to read only selected observations.\n\n\n# Examples\n\nThe image below shows a typical result over an agricultural area.\n\n![max_ndvi_example.png](./max_ndvi_example.png)\n\nThe examples below show typical resource usage figures. They illustrate that the cost varies as a function of the parameters,\nand most importantly that it is not possible to linearly extrapolate the cost from one example to another.\n\n\n## 3-month composite over Denmark\n\nA complete example including STAC metadata is shown here:\n\nhttps://radiantearth.github.io/stac-browser/#/external/s3.waw3-1.cloudferro.com/swift/v1/APEx-examples/max_ndvi_denmark/collection.json\n\nThe processing platform reported these usage statistics for the example:\n\n```\nCredits: 63\nCPU usage: 47.743,722 cpu-seconds\nWall time: 1.948 seconds\nInput Pixel 10.997,635 mega-pixel\nMax Executor Memory: 3,239 gb\nMemory usage: 154.767.121,977 mb-seconds\nNetwork Received: 1.677.537.930.040 b\n```\n\nThe relative cost is 1 CDSE platform credits per km² for a 3 month input window.\nThe cost per input pixel is 0.0057 credits per megapixel.\n\n## 15-month composite over Denmark\n\nIn a second example, a longer compositing window was tested, generating a 3-band result. Here we see a lower cost per km², but a similar cost per input\npixel.\n\n```\nCredits: 189\nCPU usage: 77.621,979 cpu-seconds\nWall time: 5.499 seconds\nInput Pixel: 31.494,448 mega-pixel\nMax Executor Memory: 4,332 gb\nMemory usage: 564.094.942,143 mb-seconds\nNetwork Received: 872.636.866.126 b\n```\n\nThe relative cost is 0.03 CDSE platform credits per km² for a 15 month input window.\nThe cost per input pixel is 0.006 credits per megapixel.\n\n# Literature references\n\nThe max-ndvi compositing method has been applied to multiple sensors, as described in literature:\n\nThis publication describes characteristics of the method when applied to AVHRR data:\nhttps://www.tandfonline.com/doi/abs/10.1080/01431168608948945\n\nThis publication applied it to Landsat data, for cropland estimation:\nhttps://www.nature.com/articles/s43016-021-00429-z\n\n# Known limitations\n\nThe method uses a vegetation index as scoring metric to determine the best pixel, making it only suitable for land applications.\nBare or urban areas may not be well represented in the composite.\n\nIt favours the observation which is least contaminated by atmospheric effects, but does not guarantee a fully uncontaminated composite.\n\nFor individual time windows of up to 3 months, the method was efficient up to 100x100km areas. For larger areas of interest, we recommend splitting the area into smaller tiles.\n\n\n# Known artifacts\n\nArtifacts are expected over water and urban areas.\n\n![max_ndvi_water_artifacts.png](./max_ndvi_water_artifacts.png)\n\nResidual cloud artifacts may be present in the composite, especially for smaller time windows or during cloudy seasons.\nThe cloud artifacts are caused by the limited capabilities of the default Sentinel-2 cloud detection mechanism to correctly identify all clouds.\n\n![max_ndvi_cloud_artifacts.png](./max_ndvi_cloud_artifacts.png)",
+    "description": "# Description\n\nThis algorithm generates a Sentinel-2 based composite for a selected area and temporal extent. By default, the resolution of the output is 10 meters.\n\nThe used compositing method is the \"max-NDVI\" method, which selects the pixel with the highest NDVI value for each pixel location and within the time window.\nThe method falls under the 'rank composite' category, and ensures that selected spectral band values for any individual pixel all come from the same observation.\n\nThe method generates good results for timeseries analytics, but spatially neighbouring pixels may be selected from different observations,\nwhich leads to visual discontinuities in the result.\n\n# Performance characteristics\n\nThe method is computationally efficient, as it only requires the B04, B08 and SCL bands to determine the rank score. Loading\nof other bands can be minimized to read only selected observations.\n\n\n# Examples\n\nThe image below shows a typical result over an agricultural area.\n\n![max_ndvi_example.png](/max_ndvi_example.png)\n\nThe examples below show typical resource usage figures. They illustrate that the cost varies as a function of the parameters,\nand most importantly that it is not possible to linearly extrapolate the cost from one example to another.\n\n\n## 3-month composite over Denmark\n\nA complete example including STAC metadata is shown here:\n\nhttps://radiantearth.github.io/stac-browser/#/external/s3.waw3-1.cloudferro.com/swift/v1/APEx-examples/max_ndvi_denmark/collection.json\n\nThe processing platform reported these usage statistics for the example:\n\n```\nCredits: 63\nCPU usage: 47.743,722 cpu-seconds\nWall time: 1.948 seconds\nInput Pixel 10.997,635 mega-pixel\nMax Executor Memory: 3,239 gb\nMemory usage: 154.767.121,977 mb-seconds\nNetwork Received: 1.677.537.930.040 b\n```\n\nThe relative cost is 1 CDSE platform credits per km² for a 3 month input window.\nThe cost per input pixel is 0.0057 credits per megapixel.\n\n## 15-month composite over Denmark\n\nIn a second example, a longer compositing window was tested, generating a 3-band result. Here we see a lower cost per km², but a similar cost per input\npixel.\n\n```\nCredits: 189\nCPU usage: 77.621,979 cpu-seconds\nWall time: 5.499 seconds\nInput Pixel: 31.494,448 mega-pixel\nMax Executor Memory: 4,332 gb\nMemory usage: 564.094.942,143 mb-seconds\nNetwork Received: 872.636.866.126 b\n```\n\nThe relative cost is 0.03 CDSE platform credits per km² for a 15 month input window.\nThe cost per input pixel is 0.006 credits per megapixel.\n\n# Literature references\n\nThe max-ndvi compositing method has been applied to multiple sensors, as described in literature:\n\nThis publication describes characteristics of the method when applied to AVHRR data:\nhttps://www.tandfonline.com/doi/abs/10.1080/01431168608948945\n\nThis publication applied it to Landsat data, for cropland estimation:\nhttps://www.nature.com/articles/s43016-021-00429-z\n\n# Known limitations\n\nThe method uses a vegetation index as scoring metric to determine the best pixel, making it only suitable for land applications.\nBare or urban areas may not be well represented in the composite.\n\nIt favours the observation which is least contaminated by atmospheric effects, but does not guarantee a fully uncontaminated composite.\n\nFor individual time windows of up to 3 months, the method was efficient up to 100x100km areas. For larger areas of interest, we recommend splitting the area into smaller tiles.\n\n\n# Known artifacts\n\nArtifacts are expected over water and urban areas.\n\n![max_ndvi_water_artifacts.png](./max_ndvi_water_artifacts.png)\n\nResidual cloud artifacts may be present in the composite, especially for smaller time windows or during cloudy seasons.\nThe cloud artifacts are caused by the limited capabilities of the default Sentinel-2 cloud detection mechanism to correctly identify all clouds.\n\n![max_ndvi_cloud_artifacts.png](./max_ndvi_cloud_artifacts.png)",
     "parameters": [
       {
         "name": "spatial_extent",
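The max_ndvi_composite description quotes derived cost-per-megapixel figures from its two usage reports. The arithmetic checks out once the European number formatting in the stats is accounted for (dot as thousands separator, comma as decimal), so "10.997,635 mega-pixel" reads as 10 997.635 megapixels. A quick cross-check:

```python
# Cross-check of the cost-per-megapixel figures quoted in the
# max_ndvi_composite description (values taken from the usage stats).
def cost_per_megapixel(credits: float, megapixels: float) -> float:
    return credits / megapixels

# 3-month composite: 63 credits over 10 997.635 megapixels -> ~0.0057
print(round(cost_per_megapixel(63, 10_997.635), 4))
# 15-month composite: 189 credits over 31 494.448 megapixels -> ~0.006
print(round(cost_per_megapixel(189, 31_494.448), 4))
```

Both results match the figures stated in the description (0.0057 and 0.006 credits per megapixel), supporting its claim that cost scales with input pixels rather than linearly with the time window.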

algorithm_catalog/vito/variabilitymap/records/variabilitymap.json

Lines changed: 2 additions & 2 deletions
@@ -9,8 +9,8 @@
     "created": "2025-01-17T16:32:27Z",
     "updated": "2025-01-17T16:32:27Z",
     "type": "apex_algorithm",
-    "title": "Daily crop performance calculation",
-    "description": "Variability maps show the spatial variation in crop performance within a field on a given date. These variations can stem from differences in soil type, hydrology, pests, diseases, or extreme weather events like drought, hail, storms, or floods.\n\nA farmer can use these variability maps to check for anomalies, or they can be used as input for variable-rate fertilization or irrigation to adjust the dose of fertilizer or water according to the spatial variation within the field.\nThe base index for calculating the variability maps is fAPAR, the fraction of absorbed photosynthetically active radiation derived from Sentinel-2 satellite images with a spatial resolution of 10m. For each cloud-free satellite image, we compare each pixel's fAPAR value to the field's median fAPAR value (pixel values are expressed as % of the median). The result is a GeoTIFF image showing the deviations.\n\n![Variability Map - Average deviations](https://artifactory.vgt.vito.be:443/auxdata-public/Nextland/services/descriptions/yieldpotentialmap/yieldmap_raw.png)\n\n*Example of a variability map (single date)*\n\nFinally, the deviations are classified into five categories according to their relevance, and color maps are generated.\n\n| Range | Class | Color |\n|----------|-------|-----------------|\n| <85% | 1 | red |\n| 85-95% | 2 | orange |\n| 95-105% | 3 | light green |\n| 105-115% | 4 | dark green |\n| >115% | 5 | darkest green |\n\n\n\nIn the red and orange zones, lower fAPAR values are found, while in the green and dark green zones, the fAPAR values are (much) higher than the median value. It is assumed that the crop performs better in the dark green zones than in the orange and red zones.\n\n\n![Variability Map - Categorized](https://artifactory.vgt.vito.be:443/auxdata-public/Nextland/services/descriptions/yieldpotentialmap/yieldmap_categories.png)\n![Variability Map - Legend](https://artifactory.vgt.vito.be:443/auxdata-public/Nextland/services/descriptions/yieldpotentialmap/yieldmap_legend.png)\n\n*Example of a variability color map (deviations classified into five categories)*\n",
+    "title": "Variability maps",
+    "description": "Daily crop performance calculation",
     "keywords": [
       "Variabilitymap",
       "nextland services",
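The variabilitymap description that this commit shortens classifies each pixel's fAPAR, expressed as a percentage of the field median, into five color-coded classes (<85% red, 85-95% orange, 95-105% light green, 105-115% dark green, >115% darkest green). A minimal sketch of that table, with an illustrative function name and half-open bins assumed at the boundaries (the original text does not specify boundary handling):

```python
# Classify a pixel's fAPAR deviation (as % of the field median) into the
# five classes from the variabilitymap description. Boundary handling
# (lower bound inclusive) is an assumption, not taken from the service code.
def classify_deviation(percent_of_median: float) -> int:
    if percent_of_median < 85:
        return 1   # red
    elif percent_of_median < 95:
        return 2   # orange
    elif percent_of_median < 105:
        return 3   # light green
    elif percent_of_median < 115:
        return 4   # dark green
    return 5       # darkest green

print([classify_deviation(v) for v in (80, 90, 100, 110, 120)])  # [1, 2, 3, 4, 5]
```

Applied per pixel to the GeoTIFF of deviations, a scheme like this yields the categorized map shown in the description's second figure.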

0 commit comments
