
Commit 3341d26

Commit message: add lightboxes and update image
Parent: 6016d06

2 files changed: 3 additions (+3), 3 deletions (−3); binary image file changed: −3.12 KB

articles/orbital/sar-reference-architecture.md (3 additions, 3 deletions)
@@ -24,7 +24,7 @@ container, and then run at scale. While the performance of processing a given im
 pipeline that utilizes vendor-provided binaries and/or open-source software. While processing of any individual file or image won't occur any faster, many files can be processed in parallel. With the flexibility of AKS, each step in the pipeline can execute on the hardware best suited for the tool, for example, GPU, high core
 count, or increased memory.

-:::image type="content" source="media/aks-argo-diagram.png" alt-text="Diagram of AKS and Argo Workflows.":::
+:::image type="content" source="media/aks-argo-diagram.png" alt-text="Diagram of AKS and Argo Workflows." lightbox="media/aks-argo-diagram.png":::

 Raw products are received by a ground station application, which, in turn, writes the data into Azure Blob Storage. Using an Azure Event Grid subscription, a notification is supplied to Azure Event Hubs when a new product image is written to blob storage. Argo Events, running on Azure Kubernetes Service, subscribes to the Azure Event Hubs notification and, upon receipt of the event, triggers an Argo Workflows workflow to process the image.

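In this architecture, Argo Events owns the Event Hubs subscription that fires the workflow. Purely to illustrate the same hand-off, the following is a minimal Java sketch of reading that blob-created notification directly from Event Hubs with the `azure-messaging-eventhubs` SDK. The environment variables, the partition choice, and the workflow-submission step are assumptions for the sketch, not code from the article.

```java
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubConsumerClient;
import com.azure.messaging.eventhubs.models.EventPosition;
import com.azure.messaging.eventhubs.models.PartitionEvent;

import java.time.Duration;

public class BlobCreatedListener {
    public static void main(String[] args) {
        // Hypothetical settings; real values come from your Event Hubs namespace.
        String connectionString = System.getenv("EVENTHUB_CONNECTION_STRING");
        String eventHubName = System.getenv("EVENTHUB_NAME");

        // Synchronous consumer client for the configured event hub.
        EventHubConsumerClient consumer = new EventHubClientBuilder()
                .connectionString(connectionString, eventHubName)
                .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
                .buildConsumerClient();

        // Poll a single partition for new events.
        for (PartitionEvent partitionEvent : consumer.receiveFromPartition(
                "0", 10, EventPosition.latest(), Duration.ofSeconds(30))) {
            // The body is the Event Grid blob-created notification (JSON),
            // which carries the URL of the product just written to Blob Storage.
            String payload = partitionEvent.getData().getBodyAsString();
            System.out.println("Blob created event: " + payload);
            // ...trigger processing of the new product here...
        }
        consumer.close();
    }
}
```

A production consumer would typically use `EventProcessorClient` instead, which checkpoints progress and balances partitions across replicas.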
@@ -38,7 +38,7 @@ With Azure Kubernetes Service, this typically involves creating node pools with
 The approach with Azure Synapse is slightly different from a normal pipeline. Many data processing firms already have algorithms that process their data. They may not want to rewrite those algorithms, but they may need a way to scale them horizontally. The approach shown here lets them easily run their code on a distributed
 framework like Apache Spark without having to deal with all the complexities of a distributed system. We take advantage of vectorization and SIMD architecture to process more than one row at a time instead of one row at a time. These features are specific to the Apache Spark DataFrame and the JVM.

-:::image type="content" source="media/azure-synapse-processing.png" alt-text="Diagram of data processing using Azure Synapse.":::
+:::image type="content" source="media/azure-synapse-processing.png" alt-text="Diagram of data processing using Azure Synapse." lightbox="media/azure-synapse-processing.png":::

 ## Data ingestion

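To make the DataFrame point concrete, here is a minimal Java sketch, not taken from the article, of expressing a per-pixel computation as built-in column expressions that Spark can execute over batches of rows, rather than as a row-at-a-time user function. The column names, the calibration formula, and the Parquet input are hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.pow;

public class CalibrateJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("sar-calibration")
                .getOrCreate();

        // Hypothetical input: pixel samples exported to Parquet.
        Dataset<Row> pixels = spark.read().parquet(args[0]);

        // Expressed as built-in column operations, the computation runs through
        // Spark's code generation and columnar readers, operating on many rows
        // per invocation instead of calling user code once per row.
        Dataset<Row> calibrated = pixels.withColumn(
                "sigma0",
                pow(col("amplitude"), 2.0).divide(col("calibration_constant")));

        calibrated.write().parquet(args[1]);
        spark.stop();
    }
}
```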
@@ -52,7 +52,7 @@ Remote Sensing Data is sent to a ground station. The ground station app collects

 Under this approach using Apache Spark, we glue the library containing the algorithms to the pipeline with JNA (Java Native Access). JNA requires you to define interfaces for your native code and does the heavy lifting of converting data to and from the native library into usable Java types. Without any major rewriting, we can distribute the computation across nodes instead of running it on a single machine. Typical Spark execution under this model looks as follows.

-:::image type="content" source="media/spark-execution-model.png" alt-text="Diagram of the Spark execution model.":::
+:::image type="content" source="media/spark-execution-model.png" alt-text="Diagram of the Spark execution model." lightbox="media/spark-execution-model.png":::

 ## Considerations

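As a sketch of the JNA glue described above, assuming a hypothetical native library `libsarproc` with a single C entry point (neither is from the article), a Java interface declares the native function and JNA binds it at load time:

```java
import com.sun.jna.Library;
import com.sun.jna.Native;

// Hypothetical native SAR-processing library exposing a C function:
//   int focus_image(const double* raw, int n, double* out);
public interface SarProcLibrary extends Library {
    // Binds to libsarproc.so / sarproc.dll found on the library path.
    SarProcLibrary INSTANCE = Native.load("sarproc", SarProcLibrary.class);

    // JNA maps Java primitive arrays to C pointers and handles the copying.
    int focus_image(double[] rawSamples, int sampleCount, double[] focusedOut);
}
```

Each Spark executor can then call `SarProcLibrary.INSTANCE.focus_image(...)` from within a `mapPartitions` function, so the existing native algorithm runs in parallel across the cluster without a rewrite.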