57 changes: 57 additions & 0 deletions .github/workflows/resize-images.yml
@@ -0,0 +1,57 @@
name: Resize Images
# A weekly run to resize images that changed in the last week

on:
  schedule:
    - cron: "0 9 * * 1" # every Monday at 09:00 UTC
  workflow_dispatch:

jobs:
  resize-images:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Fetch full history to compare with last week

      - name: Get changed image files from last week
        id: changed-files
        run: |
          # Find all image files that changed since last Monday
          CHANGED_IMAGES=$(git log --since="7 days ago" --name-only --pretty="" \
            -- '*.jpg' '*.jpeg' '*.png' | sort -u | tr '\n' ' ')

          echo "changed_images=$CHANGED_IMAGES" >> $GITHUB_OUTPUT
          echo "Changed images: $CHANGED_IMAGES"

          # Set a flag if any images were changed
          if [ -n "$CHANGED_IMAGES" ]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          else
            echo "has_changes=false" >> $GITHUB_OUTPUT
          fi

      - name: Install ImageMagick
        run: sudo apt-get update && sudo apt-get install -y imagemagick

      - name: Run tools/resize_images.sh on changed files
        if: steps.changed-files.outputs.has_changes == 'true'
        run: |
          # Pass the changed image files to the resize script
          bash tools/resize_images.sh ${{ steps.changed-files.outputs.changed_images }}

      - name: Create Pull Request
        uses: peter-evans/create-pull-request@v6
        if: steps.changed-files.outputs.has_changes == 'true' && success()
        with:
          commit-message: Resize images changed in the last week
          title: Resize images changed in the last week
          body: |
            Resize images that were modified in the last week (Monday to Monday)

            Changed files: ${{ steps.changed-files.outputs.changed_images }}

            Triggered by workflow run ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
            Auto-generated by create-pull-request: https://github.com/peter-evans/create-pull-request
          branch: resize-images
          base: main
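The change-detection step above can be exercised outside of CI. The following is a minimal Python sketch of the same `git log` query (illustrative only; the workflow itself runs the shell version, and `--pretty=format:` is used here as the argv-safe equivalent of `--pretty=""`):

```python
import subprocess

def changed_images(since="7 days ago", patterns=("*.jpg", "*.jpeg", "*.png")):
    """List unique image paths touched in commits since `since`,
    mirroring the workflow's `git log --name-only` step."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only",
         "--pretty=format:", "--", *patterns],
        capture_output=True, text=True, check=True,
    ).stdout
    # De-duplicate and drop blank lines, like `sort -u`
    return sorted({line for line in out.splitlines() if line})
```

Note that paths deleted during the week still appear in the log output, so any consumer of this list should check that each file still exists before resizing it.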
Binary file removed content/install-guides/_images/aperf.png
Binary file not shown.
Binary file added content/install-guides/_images/aperf.webp
Binary file removed content/install-guides/_images/aperf0.png
Binary file not shown.
Binary file added content/install-guides/_images/aperf0.webp
4 changes: 2 additions & 2 deletions content/install-guides/aperf.md
@@ -114,7 +114,7 @@ There are a number of tabs on the left side showing the collected data.

You can browse the data and see what has been collected.

![APerf #center](/install-guides/_images/aperf0.png)
![APerf #center](/install-guides/_images/aperf0.webp)

{{% notice Note %}}
The Kernel Config and Sysctl Data tabs are blank unless you click No.
@@ -142,7 +142,7 @@ Open the `index.html` file in the `compare/` directory to see the 2 runs side by

A screenshot is shown below:

![APerf #center](/install-guides/_images/aperf.png)
![APerf #center](/install-guides/_images/aperf.webp)

### How do I use an HTTP server to view reports?

2 changes: 1 addition & 1 deletion content/install-guides/arduino-pico.md
@@ -40,7 +40,7 @@ From the menu select `Tools -> Board -> Boards Manager`.

When the `Boards Manager` opens search for `pico` and the `Arduino Mbed OS RP2040 Boards` will be displayed. Click the `Install` button to add it to the Arduino IDE.

![Arduino Board Manager](/install-guides/_images/arduino_rp2040_boards.png)
![Arduino Board Manager](/install-guides/_images/arduino_rp2040_boards.webp)

### How do I set up the Raspberry Pi Pico W?

2 changes: 1 addition & 1 deletion content/install-guides/windows-perf-vs-extension.md
@@ -109,7 +109,7 @@ The WindowsPerf extension is composed of several key features, each designed to

The sampling interface is shown below:

![Sampling preview #center](/install-guides/_images/wperf-vs-extension-sampling-preview.png)
![Sampling preview #center](/install-guides/_images/wperf-vs-extension-sampling-preview.webp)

* Counting Settings UI: Build a `wperf stat` command from scratch using the configuration interface, then view the output in VS Code or open it with Windows Performance Analyzer (WPA).
The interface to configure counting is shown below:
@@ -150,7 +150,7 @@ Once the script starts successfully, you will see a similar output to the image

You can use your browser to monitor the simulation data in real-time.

![img1 alt-text#center](vnc_address.png "Figure 1: Execute run.sh")
![img1 alt-text#center](vnc_address.webp "Figure 1: Execute run.sh")

Now you can use the browser to access visualization.
In this example, the URL is http://34.244.98.151:6080/vnc.html
@@ -91,7 +91,7 @@ Run the program on both systems:

For easy comparison, the image below shows the x86 output (left) and Arm output (right). The highlighted lines show the difference in output:

![differences](./differences.png)
![differences](./differences.webp)

As you can see, there are several cases where different behavior is observed in these undefined scenarios. For example, when trying to convert a signed number to an unsigned number or dealing with out-of-bounds values.

@@ -26,7 +26,7 @@ Create a repository in your GitLab account by clicking the "+" sign at the top-left

After you create the repository, navigate to `Settings->CI/CD` in the left-hand pane. Expand the `Runners` section and under `Project Runners`, select `New Project Runner`.

![arm64-runner #center](_images/create-gitlab-runner.png)
![arm64-runner #center](_images/create-gitlab-runner.webp)

Use `Tags` to specify the jobs that can be executed on the runner. In the `Tags` field, enter `arm64`. In `Runner description`, enter `google-axion-arm64-runner` and click the `Create Runner` button.

@@ -69,7 +69,7 @@ Runner registered successfully. Feel free to start it, but if it's running alrea

You should see the newly registered runner in the Runners section of the GitLab console as shown below.

![registered-runner #center](_images/registered-runner.png)
![registered-runner #center](_images/registered-runner.webp)

To create an `amd64` GitLab runner, follow the same steps as above, except for the `Download binaries` section. Change the download URL to `https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64`.

@@ -27,7 +27,7 @@ During inference, such as when trying to generate the next *token* or *word* wit

For example, in the image below, *z1* is calculated as a dot product of connected *x*s and *w*s from the previous layer. A matrix multiplication operation can therefore efficiently calculate all *z* values in Layer 0.

![Neural Network example#center](neural-node-pic.jpg "Zoomed-in neural network node.")
![Neural Network example#center](neural-node-pic.webp "Zoomed-in neural network node.")
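As a concrete sketch of that dot product (illustrative values only, not taken from any real model):

```python
import numpy as np

# Layer-0 pre-activations: each z is a dot product of the inputs x with one
# row of weights, so a single matrix multiplication computes every neuron at once.
x = np.array([1.0, 2.0, 3.0])          # activations from the previous layer
W = np.array([[0.1, 0.2, 0.3],         # one weight row per neuron in Layer 0
              [0.4, 0.5, 0.6]])
b = np.array([0.01, 0.02])             # per-neuron biases
z = W @ x + b                          # z1 = 0.1*1 + 0.2*2 + 0.3*3 + 0.01, ...
print(z)                               # [1.41 3.22]
```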


In addition to *weights*, each neuron in a neural network is assigned a *bias*. These weights and biases are learned during training, and make up a model's parameters. For example, in the Llama 3 model with 8 billion parameters, the model has around 8 billion individual weights and biases that embody what the model learned during training. Generally speaking, the higher the number of parameters a model has, the more information it can retain from its training, which increases its performance capability. For more information about Llama 3 view its [Hugging Face model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B).
@@ -17,7 +17,7 @@ To access Compiler Explorer, open a browser and go to https://godbolt.org.

This leads you to the page shown below in Figure 1. Your view might be slightly different.

![godbolt open alt-text#center](open.png "Figure 1. Compiler Explorer")
![godbolt open alt-text#center](open.webp "Figure 1. Compiler Explorer")

The left side of the page contains the source code. In Figure 1, the language is set to C++, but you can click on the programming language to select a different language for the source code.

@@ -72,7 +72,7 @@ Make sure to replace 'x' with the version number of Python that you have install

After running the code, you will see output similar to Figure 5:

![image alt-text#center](figures/01.png "Figure 5. Output")
![image alt-text#center](figures/01.webp "Figure 5. Output")

## Train the Model

@@ -134,7 +134,7 @@ for t in range(epochs):

After running the code, you see the following output showing the training progress, as displayed in Figure 2.

![image alt-text#center](figures/02.png "Figure 2. Output 2")
![image alt-text#center](figures/02.webp "Figure 2. Output 2")
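The `for t in range(epochs)` loop referenced in the hunk above follows the standard training cycle: forward pass, loss, gradient, update. A minimal NumPy analogue of one such loop (a hypothetical one-weight model, not the tutorial's network):

```python
import numpy as np

# Fit y = 2x by gradient descent: the same forward / loss / backward / update
# cycle that the tutorial's epoch loop runs in PyTorch.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)   # toy inputs (hypothetical data)
y = 2.0 * x                            # targets: the true weight is 2.0
w, lr = 0.0, 0.1                       # initial weight, learning rate
for t in range(200):                   # epochs
    y_pred = w * x                     # forward pass
    loss = np.mean((y_pred - y) ** 2)  # mean squared error
    grad = np.mean(2.0 * (y_pred - y) * x)  # d(loss)/dw
    w -= lr * grad                     # gradient descent update
print(f"learned w = {w:.3f}")          # approaches the true value 2.0
```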

Once the training is complete, you see output similar to:

@@ -108,7 +108,7 @@ This code demonstrates how to use a saved PyTorch model for inference and visual

After running the code, you should see results similar to the following figure:

![image](figures/03.png "Figure 6. Results Displayed")
![image](figures/03.webp "Figure 6. Results Displayed")

### What have you learned?

@@ -117,7 +117,7 @@ To ensure everything is set up correctly, follow these next steps:

4. Select the Python kernel you created earlier, `pytorch-env`. To do so, click **Kernels** in the top right-hand corner. Then, click **Jupyter Kernel...**, and you will see the Python kernel as shown below:

![img1 alt-text#center](figures/1.png "Figure 1: Python kernel.")
![img1 alt-text#center](figures/1.webp "Figure 1: Python kernel.")

5. In your Jupyter notebook, run the following code to verify PyTorch is working correctly:

@@ -127,6 +127,6 @@ print(torch.__version__)
```

It will look as follows:
![img2 alt-text#center](figures/2.png "Figure 2: Jupyter Notebook.")
![img2 alt-text#center](figures/2.webp "Figure 2: Jupyter Notebook.")

Now that you have set up your development environment, you can move on to creating a PyTorch model.
@@ -94,7 +94,7 @@ summary(model, (1, 28, 28))

After running the notebook, you will see the output as shown in Figure 4:

![img4 alt-text#center](figures/4.png "Figure 4: Notebook Output.")
![img4 alt-text#center](figures/4.webp "Figure 4: Notebook Output.")

You will see a detailed summary of the NeuralNetwork model's architecture, including the following information:

@@ -246,7 +246,7 @@ If successful, you should see the LED on your board light up. If you wave your h

You can further check that your code is running properly by opening the `Serial Monitor` from the `Tools` menu of the Arduino IDE. There you should see all of the output messages coming from your sketch, including the count of detected motion events.

![Debug output](_images/output.png)
![Debug output](_images/output.webp)

Congratulations! You have successfully programmed your microcontroller and built a working, if simple, smart device.

@@ -39,39 +39,39 @@ If you're not familiar with a breadboard, the image above shows you how all of t

### Step 1: Seat your Raspberry Pi

![RaspberryPi Pico](_images/pico_on_breadboard.png)
![Raspberry Pi Pico](_images/pico_on_breadboard.webp)

Seat your Raspberry Pi Pico on the breadboard so that its rows of pins sit on either side of the center divider. Make sure that it's firmly pressed all the way down but be careful not to bend any of the pins.

### Step 2: PIR ground

![PIR ground](_images/pir_sensor_1.png)
![PIR ground](_images/pir_sensor_1.webp)

Using a black jumper wire, connect the ground pin of your PIR sensor to pin #38 on your Pico. This pin is a ground voltage pin on the Pico.

### Step 3: PIR input voltage

![PIR voltage](_images/pir_sensor_2.png)
![PIR voltage](_images/pir_sensor_2.webp)

Using a red wire, connect the input voltage pin of your PIR sensor to pin #36 on your Pico. This pin is a 3.3 volt pin on the Pico and will supply power to your PIR sensor.

### Step 4: PIR data

![PIR data](_images/pir_sensor_3.png)
![PIR data](_images/pir_sensor_3.webp)

The last step to connecting the PIR sensor is to connect the middle data pin to pin #34 on your Pico. This is a GPIO pin that you can use to either read or write data.

Note that this is GPIO #28, even though it's physical pin #34. Physical pin number and GPIO numbers are not the same.

### Step 5: Buzzer ground

![Buzzer ground](_images/piezo_1.png)
![Buzzer ground](_images/piezo_1.webp)

Next, it's time to connect the buzzer. Start by connecting the buzzer's ground pin to pin #23 on your Pico. This is another ground pin that is built into your board.

### Step 6: Buzzer input

![Buzzer input](_images/piezo_2.png)
![Buzzer input](_images/piezo_2.webp)

Then, connect the buzzer's input pin to pin #25 on your Pico. This is another GPIO pin, this time GPIO #19.

@@ -14,7 +14,7 @@ This section provides hands-on instructions for you to deploy pre-trained Paddle

The steps involved in the model deployment are shown in the figure below:

![End-to-end workflow#center](./Figure3.png "Figure 3. End-to-end workflow")
![End-to-end workflow#center](./Figure3.webp "Figure 3. End-to-end workflow")

## Deploy PaddleOCR text recognition model on the Corstone-300 FVP included with Arm Virtual Hardware

@@ -37,7 +37,7 @@ You will need the following components:
- **Anode (long leg) of the LED** → connect to **GPIO pin D2** through a 220 Ω resistor
- **Cathode (short leg)** → connect to **GND**

![Diagram showing the physical breadboard circuit connecting an LED to GPIO D2 and GND on the Arduino Nano RP2040 alt-text#center](images/led_connection.png)
![Diagram showing the physical breadboard circuit connecting an LED to GPIO D2 and GND on the Arduino Nano RP2040 alt-text#center](images/led_connection.webp)

![Schematic diagram showing the LED connected between GPIO D2 and GND with a 220 Ω resistor in series alt-text#center](images/led_connection_schematic.png)

@@ -48,7 +48,7 @@ In the following sections, you'll walk through each key page on the Edge Impulse



![Screenshot of the Edge Impulse home page showing the main navigation and project dashboard alt-text#center](images/1.png "Home page of Edge Impulse website")
![Screenshot of the Edge Impulse home page showing the main navigation and project dashboard alt-text#center](images/1.webp "Home page of Edge Impulse website")


## Create a new project
@@ -59,7 +59,7 @@ For example, if you're building a keyword-spotting model, you might name it `Wak

You'll also need to select the appropriate **project type** and **project settings**, as shown in the screenshot below.

![Screenshot showing the new project creation page in Edge Impulse, with fields for project name, type, and target device alt-text#center](images/3.png "New project setup")
![Screenshot showing the new project creation page in Edge Impulse, with fields for project name, type, and target device alt-text#center](images/3.webp "New project setup")

## Configure the target device

@@ -69,7 +69,7 @@ You can find the full specifications for the Arduino Nano RP2040 Connect on [Ard

Follow the settings shown in the screenshot to complete the configuration.

![Screenshot showing the Edge Impulse device configuration page with Arduino Nano RP2040 Connect selected alt-text#center](images/4.png "Configure Arduino Nano RP2040")
![Screenshot showing the Edge Impulse device configuration page with Arduino Nano RP2040 Connect selected alt-text#center](images/4.webp "Configure Arduino Nano RP2040")


## Add the dataset
@@ -84,14 +84,14 @@ git clone https://github.com/e-dudzi/Learning-Path.git

The repository contains a `Dataset.zip` file with the dataset used in this project. Extract the contents to your local machine. For convenience, the dataset is already split into **training** and **testing** sets.

![Screenshot showing the Edge Impulse interface with the Add existing data panel open, used to upload pre-recorded datasets alt-text#center](images/6.png "Adding existing data")
![Screenshot showing the Edge Impulse interface with the Add existing data panel open, used to upload pre-recorded datasets alt-text#center](images/6.webp "Adding existing data")


{{% notice Note %}}
Do not check the green highlighted area during upload. The dataset already includes metadata. Enabling that option may result in much slower upload times and is unnecessary for this project.
{{% /notice %}}

![Screenshot showing the Data acquisition tab in Edge Impulse with uploaded samples organized by label alt-text#center](images/7.png "Dataset overview")
![Screenshot showing the Data acquisition tab in Edge Impulse with uploaded samples organized by label alt-text#center](images/7.webp "Dataset overview")

## Dataset uploaded successfully

@@ -106,7 +106,7 @@ This dataset consists of four labels:
- unknown
{{% /notice %}}

![Screenshot showing the Impulse design interface in Edge Impulse with input, processing, and learning blocks configured alt-text#center](images/8.png "Dataset overview")
![Screenshot showing the Impulse design interface in Edge Impulse with input, processing, and learning blocks configured alt-text#center](images/8.webp "Dataset overview")

## Create the impulse

@@ -117,7 +117,7 @@ Click **Create impulse** in the menu and configure it as shown in the screenshot
After configuring the impulse, make sure to **save your changes**.


![example image alt-text#center](images/9.png "Create Impulse")
![example image alt-text#center](images/9.webp "Create Impulse")

## Configure the MFCC block

@@ -129,7 +129,7 @@ Set the parameters exactly as shown in the screenshot. These settings determine

These defaults are chosen for this Learning Path, but you can experiment with different values once you're more familiar with Edge Impulse.

![Screenshot showing the MFCC configuration page in Edge Impulse with time and frequency parameters set for feature extraction alt-text#center](images/10.png "MFCC block configuration")
![Screenshot showing the MFCC configuration page in Edge Impulse with time and frequency parameters set for feature extraction alt-text#center](images/10.webp "MFCC block configuration")

{{< notice Note >}}
The green-highlighted section on the MFCC configuration page provides an estimate of how the model will perform on the target device. This includes memory usage (RAM and flash) and latency, helping ensure the model fits within hardware constraints.
@@ -141,7 +141,7 @@ After saving the MFCC parameters, the next step is to generate features from you

When complete, you'll see a **2D feature plot** that shows how the data is distributed across the four labels: `on`, `off`, `noise`, and `unknown`. This helps visually confirm whether the classes are distinct and learnable.

![Screenshot showing the feature explorer in Edge Impulse with a 2D visualization of four labeled audio classes alt-text#center](images/12.png "Feature explorer")
![Screenshot showing the feature explorer in Edge Impulse with a 2D visualization of four labeled audio classes alt-text#center](images/12.webp "Feature explorer")

## Set up the classifier

@@ -153,7 +153,7 @@ For this Learning Path, use a learning rate of `0.002` even though the screensho

Once all parameters are set, click **Save and train** to begin training your model.

![Screenshot showing the classifier configuration screen in Edge Impulse with neural network settings for audio classification alt-text#center](images/13.png)
![Screenshot showing the classifier configuration screen in Edge Impulse with neural network settings for audio classification alt-text#center](images/13.webp)

## Review model performance

@@ -189,7 +189,7 @@ To run the trained model on your Arduino Nano RP2040 Connect, export it as an Ar

The model will be downloaded as a `.zip` file, which you can import into the Arduino IDE.

![Screenshot showing the Edge Impulse deployment page with Arduino library export selected alt-text#center](images/16.png)
![Screenshot showing the Edge Impulse deployment page with Arduino library export selected alt-text#center](images/16.webp)

## Next steps

@@ -37,7 +37,7 @@ If you are unsure of how to insert the ribbon into the connector [watch the vide
You can use a USB camera instead for object detection, but the instructions below assume a MIPI CSI-2 camera.
{{% /notice %}}

![image of the ribbon inserted into the connector](./cam0connector.jpg)
![image of the ribbon inserted into the connector](./cam0connector.webp)

### Power on the Jetson Orin Nano
