Merged
Changes from all commits
47 commits
dca085e
Deploy Envoy on Google Axion C4A Arm virtual machine
odidev Aug 25, 2025
7a22274
Update to ALP: Build an Android chat app with Llama, KleidiAI, ExecuT…
Sep 1, 2025
4e152ed
Tech review of MongoDB for GCP
annietllnd Sep 1, 2025
4255ad8
Update baseline-testing.md
annietllnd Sep 1, 2025
dcc9bc1
Merge pull request #2270 from annietllnd/main
pareenaverma Sep 2, 2025
a8c2dd1
Merge pull request #2269 from amalaugustinejose/build-llama3-chat-and…
pareenaverma Sep 2, 2025
21e19ab
Fix skillevels across content
annietllnd Sep 3, 2025
d0308de
Merge pull request #2274 from annietllnd/main
pareenaverma Sep 3, 2025
c73ed9c
Add a line numbering start attribute.
Arnaud-de-Grandmaison-ARM Sep 4, 2025
dff9fb9
Index update
madeline-underwood Sep 4, 2025
8685f09
Updated baseline
madeline-underwood Sep 4, 2025
c83029d
Content dev
madeline-underwood Sep 4, 2025
11afffd
Content dev
madeline-underwood Sep 4, 2025
5525485
Merge pull request #2276 from Arnaud-de-Grandmaison-ARM/line-numberin…
pareenaverma Sep 4, 2025
b608ba3
Merge branch 'ArmDeveloperEcosystem:main' into tune_network_workloads
madeline-underwood Sep 4, 2025
eac2812
Final tweaks
madeline-underwood Sep 4, 2025
19dccf9
Content dev
madeline-underwood Sep 5, 2025
3ba1bd3
Content development
madeline-underwood Sep 5, 2025
82e59f9
Fix verify_index_fields.py so it splits input on yaml's '---'
Arnaud-de-Grandmaison-ARM Sep 5, 2025
715c135
Merge pull request #2279 from madeline-underwood/tune_network_workloads
pareenaverma Sep 5, 2025
a1d1f07
Merge pull request #2285 from Arnaud-de-Grandmaison-ARM/verify-index-…
pareenaverma Sep 5, 2025
1f04511
Content dev
madeline-underwood Sep 5, 2025
4a422d3
Content dev
madeline-underwood Sep 5, 2025
afb9375
Final tweaks
madeline-underwood Sep 5, 2025
fdaac51
Merge pull request #2288 from madeline-underwood/github_actions
pareenaverma Sep 5, 2025
1998ca5
Update _index.md
pareenaverma Sep 5, 2025
50e97ee
Merge pull request #2256 from odidev/envoy_LP
pareenaverma Sep 5, 2025
6be58c3
Update .wordlist.txt
pareenaverma Sep 8, 2025
8533fef
Merge pull request #2289 from pareenaverma/content_review
pareenaverma Sep 8, 2025
64b9553
typo fixes
pareenaverma Sep 8, 2025
37859ed
Content development review
madeline-underwood Sep 8, 2025
2ed9de0
Merge branch 'content_review' of https://github.com/pareenaverma/arm-…
pareenaverma Sep 8, 2025
f972632
Type fixes
pareenaverma Sep 8, 2025
f797021
Merge pull request #2290 from pareenaverma/content_review
pareenaverma Sep 8, 2025
74e74cf
Tweaks
madeline-underwood Sep 8, 2025
f4da3c0
Tech review of Envoy LP
pareenaverma Sep 8, 2025
606e6cd
Merge pull request #2292 from pareenaverma/content_review
pareenaverma Sep 8, 2025
880e5ff
Merge pull request #2291 from madeline-underwood/mongoDB
pareenaverma Sep 8, 2025
fa7caae
Moving FP learning path to draft until Google feedback is addressed
pareenaverma Sep 8, 2025
f009b48
Merge pull request #2293 from pareenaverma/content_review
pareenaverma Sep 8, 2025
6e48b23
Update benchmarking.md
pareenaverma Sep 8, 2025
c76faeb
Update background.md
pareenaverma Sep 8, 2025
455bc45
Update flow.md
pareenaverma Sep 8, 2025
378e1bf
Update cca-trustee.md
pareenaverma Sep 8, 2025
6c25d00
Update cca-trustee.md
pareenaverma Sep 8, 2025
324815c
Update .wordlist.txt
pareenaverma Sep 8, 2025
343d0a1
Merge pull request #2294 from pareenaverma/content_review
pareenaverma Sep 8, 2025
57 changes: 56 additions & 1 deletion .wordlist.txt
@@ -4666,5 +4666,60 @@ crosh
Sommelier
chromeos
linuxcontainers

XPS
NIC's
offlines
passthrough
SLOs
Ker
Rui
SmartNICs
selectedalt
UIalt
lpprojectubuntuarm
RDV
chiplet
BMC
upstreams
rdv
Initrd
handoff
ACPI
PCRs
MHU
Handoff
userland
CXL
DDR
PHYs
UCIe
handoffs
CCG
CML
Codespaces
Cheng
GDM
LPI
nsec
shortcode
BSON
joedog
Seige
Antonov
jwt
kbs
Nfpl
ZjnAMjLk
hCpeYsarnnGv
kbs
rvps
xcbTMTBX
CDH
RVPS
Attester
attester
ATtestation
CoCo
procedureS
NIC’s

@@ -83,12 +83,26 @@ Specify that line_numbers are true in the following way:
\`\`\`bash { line_numbers = "true" } \
echo 'hello world' \
echo 'I am line two' \
\`\`\`

```bash { line_numbers = "true" }
echo 'hello world'
echo 'I am line two'
```

In some cases, the line numbering should not start from one but from another
value, for example if the code excerpt is extracted from a larger file. Use the
`line_start` attribute to achieve this:

\`\`\`bash { line_numbers = "true" line_start = "10" } \
echo 'hello world' \
echo 'I am line eleven' \
\`\`\`

```bash { line_numbers = "true" line_start = "10" }
echo 'hello world'
echo 'I am line eleven'
```

### Output Lines

@@ -100,7 +114,7 @@ There are three ways you can specify command outputs in code:
{{% notice Note %}}
In each of the three situations, code marked as 'output' will:
- not be copied when clicking the 'copy' button
-- not be highlightable by a cursor
+- not be highlighted by a cursor
- appear slightly darker
{{% /notice %}}
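As an illustrative sketch of one such marker (assuming the `output_lines` fence attribute is one of the three ways), the second line of this block would be treated as output:

```bash { output_lines = "2" }
echo 'hello world'
hello world
```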

@@ -1,6 +1,10 @@
---
title: Explore floating-point differences between x86 and Arm

draft: true
cascade:
  draft: true

minutes_to_complete: 30

who_is_this_for: This is an introductory topic for developers who are porting applications from x86 to Arm and want to understand how floating-point behavior differs between these architectures - particularly in the context of numerical consistency, performance, and debugging subtle bugs.
@@ -7,41 +7,41 @@ cascade:

minutes_to_complete: 90

who_is_this_for: This topic is for machine learning engineers, embedded AI developers, and researchers interested in deploying TinyML models for NLP on Arm-based edge devices using PyTorch and ExecuTorch.

learning_objectives:
- Train a custom CNN-based sentiment classification model implemented in PyTorch.
- Optimize and convert the model using ExecuTorch for Arm-based edge devices.
- Deploy and run inference on the Corstone-320 FVP.

prerequisites:
- Basic knowledge of machine learning concepts.
- It is advised to complete The Learning Path, [Introduction to TinyML on Arm using PyTorch and ExecuTorch](/learning-paths/embedded-and-microcontrollers/introduction-to-tinyml-on-arm) before starting this learning path.
- Familiarity with Python and PyTorch.
- A Linux host machine or VM running Ubuntu 22.04 or higher.
- An Arm license to run the examples on the Corstone-320 Fixed Virtual Platform (FVP), for hands-on deployment.


author: Dominica Abena O. Amanfo

### Tags
-skilllevels: Intermediate
+skilllevels: Introductory
subjects: ML
armips:
- Cortex-A
tools_software_languages:
- tinyML
- CNN
- PyTorch
- ExecuTorch

operatingsystems:
- Linux


further_reading:
- resource:
title: Run Llama 3 on a Raspberry Pi 5 using ExecuTorch
link: /learning-paths/embedded-and-microcontrollers/rpi-llama3
type: website
- resource:
@@ -5,7 +5,7 @@ minutes_to_complete: 10

who_is_this_for: This is a topic for users of µVision who want to migrate to the new project format (csolution) required by CMSIS-Toolbox.

learning_objectives:
- Import, convert, and build uvprojx-based projects in Keil Studio.
- Convert uvprojx-based projects in µVision.
- Convert and build uvprojx-based projects on the command line.
@@ -19,7 +19,7 @@ prerequisites:
author: Christopher Seidl

### Tags
-skilllevels: Intermediate
+skilllevels: Advanced
subjects: Performance and Architecture
armips:
- Cortex-M
@@ -43,7 +43,7 @@ further_reading:
link: https://community.arm.com/arm-community-blogs/b/internet-of-things-blog/posts/keil-mdk-version-6
type: blog
- resource:
title: keil.arm.com
link: https://keil.arm.com
type: website

@@ -18,7 +18,7 @@ prerequisites:
author: Dawid Borycki

### Tags
-skilllevels: Intermediate
+skilllevels: Introductory
subjects: Migration to Arm
armips:
- Cortex-A
@@ -5,7 +5,7 @@ minutes_to_complete: 25

who_is_this_for: Software developers of Android applications and mobile games who are interested in learning how to enable Arm Fixed Rate Compression (AFRC) to improve performance.

learning_objectives:
- Query for fixed-rate compression support.
- Specify what compression to use.
- Verify that compression is applied.
@@ -18,7 +18,7 @@ prerequisites:
author: Jose-Emilio Munoz-Lopez

### Tags
-skilllevels: Intermediate
+skilllevels: Advanced
subjects: Graphics
armips:
- Mali
@@ -45,6 +45,7 @@ cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
-DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
-DEXECUTORCH_BUILD_KERNELS_LLM=ON \
-DEXECUTORCH_BUILD_EXTENSION_LLM_RUNNER=ON \
-DEXECUTORCH_BUILD_EXTENSION_LLM=ON \
-DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \
-DEXECUTORCH_XNNPACK_ENABLE_KLEIDI=ON \
-DXNNPACK_ENABLE_ARM_BF16=OFF \
@@ -82,6 +83,10 @@ cmake --build cmake-out-android/examples/models/llama -j16 --config Release

You should now have `llama_main` available for Android.

{{% notice Note %}}
If you notice that Gradle cannot find the Android SDK, add the `sdk.dir` path to `executorch/extension/android/local.properties`.
{{% /notice %}}
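As a minimal sketch (the SDK location is machine-specific; the path shown is illustrative), that file would contain:

```
sdk.dir=/home/user/Android/Sdk
```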

## Run on Android via adb shell
You will need an Arm-powered smartphone running Android, with support for the i8mm feature and 16 GB of RAM. The following steps were tested on a Google Pixel 8 Pro phone.

@@ -103,7 +108,7 @@ You should see your device listed to confirm it is connected.
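The standard way to list connected devices is:

```bash
adb devices
```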

``` bash
adb shell mkdir -p /data/local/tmp/llama
-adb push llama3_1B_kv_sdpa_xnn_qe_4_128_1024_embedding_4bit.pte /data/local/tmp/llama/
+adb push llama3_1B_kv_sdpa_xnn_qe_4_64_1024_embedding_4bit.pte /data/local/tmp/llama/
adb push $HOME/.llama/checkpoints/Llama3.2-1B-Instruct/tokenizer.model /data/local/tmp/llama/
adb push cmake-out-android/examples/models/llama/llama_main /data/local/tmp/llama/
```
@@ -114,49 +119,53 @@ adb push cmake-out-android/examples/models/llama/llama_main /data/local/tmp/llama/
Use the Llama runner to execute the model on the phone with the `adb` command:

``` bash
adb shell "cd /data/local/tmp/llama && ./llama_main --model_path llama3_1B_kv_sdpa_xnn_qe_4_64_1024_embedding_4bit.pte --tokenizer_path tokenizer.model --prompt "<|start_header_id|>system<|end_header_id|>\nYour name is Cookie. you are helpful, polite, precise, concise, honest, good at writing. You always give precise and brief answers up to 32 words<|eot_id|><|start_header_id|>user<|end_header_id|>\nHey Cookie! how are you today?<|eot_id|><|start_header_id|>assistant<|end_header_id|>" --warmup=1 --cpu_threads=5
adb shell "cd /data/local/tmp/llama && ./llama_main --model_path llama3_1B_kv_sdpa_xnn_qe_4_64_1024_embedding_4bit.pte --tokenizer_path tokenizer.model --prompt "<|start_header_id|>system<|end_header_id|>\nYour name is Cookie. you are helpful, polite, precise, concise, honest, good at writing. You always give precise and brief answers up to 32 words<|eot_id|><|start_header_id|>user<|end_header_id|>\nHey Cookie! how are you today?<|eot_id|><|start_header_id|>assistant<|end_header_id|>" --warmup=1 --cpu_threads=5"
```
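In this command, `--model_path` and `--tokenizer_path` point to the files pushed in the previous step, `--warmup=1` performs a warmup generation pass before the measured run, and `--cpu_threads=5` sets the threadpool size, which the log below confirms with `Resetting threadpool with num threads = 5`.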

The output should look something like this:

```
-I 00:00:00.003316 executorch:main.cpp:69] Resetting threadpool with num threads = 5
-I 00:00:00.009329 executorch:runner.cpp:59] Creating LLaMa runner: model_path=llama3_1B_kv_sdpa_xnn_qe_4_64_1024_embedding_4bit.pte, tokenizer_path=tokenizer.model
-I 00:00:03.569399 executorch:runner.cpp:88] Reading metadata from model
-I 00:00:03.569451 executorch:runner.cpp:113] Metadata: use_sdpa_with_kv_cache = 1
-I 00:00:03.569455 executorch:runner.cpp:113] Metadata: use_kv_cache = 1
-I 00:00:03.569459 executorch:runner.cpp:113] Metadata: get_vocab_size = 128256
-I 00:00:03.569461 executorch:runner.cpp:113] Metadata: get_bos_id = 128000
-I 00:00:03.569464 executorch:runner.cpp:113] Metadata: get_max_seq_len = 1024
-I 00:00:03.569466 executorch:runner.cpp:113] Metadata: enable_dynamic_shape = 1
-I 00:00:03.569469 executorch:runner.cpp:120] eos_id = 128009
-I 00:00:03.569470 executorch:runner.cpp:120] eos_id = 128001
-I 00:00:03.569471 executorch:runner.cpp:120] eos_id = 128006
-I 00:00:03.569473 executorch:runner.cpp:120] eos_id = 128007
-I 00:00:03.569475 executorch:runner.cpp:168] Doing a warmup run...
-I 00:00:03.838634 executorch:text_prefiller.cpp:53] Prefill token result numel(): 128256
-
-I 00:00:03.892268 executorch:text_token_generator.h:118]
+I tokenizers:regex.cpp:27] Registering override fallback regex
+I 00:00:00.003288 executorch:main.cpp:87] Resetting threadpool with num threads = 5
+I 00:00:00.006393 executorch:runner.cpp:44] Creating LLaMa runner: model_path=llama3_1B_kv_sdpa_xnn_qe_4_64_1024_embedding_4bit.pte, tokenizer_path=tokenizer.model
+E tokenizers:hf_tokenizer.cpp:60] Error parsing json file: [json.exception.parse_error.101] parse error at line 1, column 1: syntax error while parsing value - invalid literal; last read: 'I'
+I 00:00:00.131486 executorch:llm_runner_helper.cpp:57] Loaded TikToken tokenizer
+I 00:00:00.131525 executorch:llm_runner_helper.cpp:167] Reading metadata from model
+I 00:00:00.186538 executorch:llm_runner_helper.cpp:110] Metadata: use_sdpa_with_kv_cache = 1
+I 00:00:00.186574 executorch:llm_runner_helper.cpp:110] Metadata: use_kv_cache = 1
+I 00:00:00.186578 executorch:llm_runner_helper.cpp:110] Metadata: get_max_context_len = 1024
+I 00:00:00.186584 executorch:llm_runner_helper.cpp:110] Metadata: get_max_seq_len = 1024
+I 00:00:00.186588 executorch:llm_runner_helper.cpp:110] Metadata: enable_dynamic_shape = 1
+I 00:00:00.186596 executorch:llm_runner_helper.cpp:140] eos_id = 128009
+I 00:00:00.186597 executorch:llm_runner_helper.cpp:140] eos_id = 128001
+I 00:00:00.186599 executorch:llm_runner_helper.cpp:140] eos_id = 128006
+I 00:00:00.186600 executorch:llm_runner_helper.cpp:140] eos_id = 128007
+I 00:00:01.086570 executorch:text_llm_runner.cpp:89] Doing a warmup run...
+I 00:00:01.087836 executorch:text_llm_runner.cpp:152] Max new tokens resolved: 128, given start_pos 0, num_prompt_tokens 54, max_context_len 1024
+I 00:00:01.292740 executorch:text_prefiller.cpp:93] Prefill token result numel(): 128256
+
+I 00:00:02.264371 executorch:text_token_generator.h:123]
Reached to the end of generation
-I 00:00:03.892281 executorch:runner.cpp:267] Warmup run finished!
-I 00:00:03.892286 executorch:runner.cpp:174] RSS after loading model: 1269.445312 MiB (0 if unsupported)
-<|start_header_id|>system<|end_header_id|>\nYour name is Cookie. you are helpful, polite, precise, concise, honest, good at writing. You always give precise and brief answers up to 32 words<|eot_id|><|start_header_id|>user<|end_header_id|>\nHey Cookie! how are you today?<|eot_id|><|start_header_id|>assistant<|end_header_id|>I 00:00:04.076905 executorch:text_prefiller.cpp:53] Prefill token result numel(): 128256
-
-
-I 00:00:04.078027 executorch:runner.cpp:243] RSS after prompt prefill: 1269.445312 MiB (0 if unsupported)
-I'm doing great, thanks! I'm always happy to help, communicate, and provide helpful responses. I'm a bit of a cookie (heh) when it comes to delivering concise and precise answers. What can I help you with today?<|eot_id|>
-I 00:00:05.399304 executorch:text_token_generator.h:118]
+I 00:00:02.264379 executorch:text_llm_runner.cpp:209] Warmup run finished!
+I 00:00:02.264384 executorch:text_llm_runner.cpp:95] RSS after loading model: 1122.187500 MiB (0 if unsupported)
+I 00:00:02.264624 executorch:text_llm_runner.cpp:152] Max new tokens resolved: 74, given start_pos 0, num_prompt_tokens 54, max_context_len 1024
+<|start_header_id|>system<|end_header_id|>\nYour name is Cookie. you are helpful, polite, precise, concise, honest, good at writing. You always give precise and brief answers up to 32 words<|eot_id|><|start_header_id|>user<|end_header_id|>\nHey Cookie! how are you today?<|eot_id|><|start_header_id|>assistant<|end_header_id|>I 00:00:02.394162 executorch:text_prefiller.cpp:93] Prefill token result numel(): 128256
+
+
+I 00:00:02.394373 executorch:text_llm_runner.cpp:179] RSS after prompt prefill: 1122.187500 MiB (0 if unsupported)
+I'm doing great, thanks for asking! I'm always ready to help, whether it's answering a question or providing a solution. What can I help you with today?<|eot_id|>
+I 00:00:03.072966 executorch:text_token_generator.h:123]
Reached to the end of generation
-I 00:00:05.399314 executorch:runner.cpp:257] RSS after finishing text generation: 1269.445312 MiB (0 if unsupported)
-PyTorchObserver {"prompt_tokens":54,"generated_tokens":51,"model_load_start_ms":1710296339487,"model_load_end_ms":1710296343047,"inference_start_ms":1710296343370,"inference_end_ms":1710296344877,"prompt_eval_end_ms":1710296343556,"first_token_ms":1710296343556,"aggregate_sampling_time_ms":49,"SCALING_FACTOR_UNITS_PER_SECOND":1000}
-I 00:00:04.530945 executorch:stats.h:108] Prompt Tokens: 54 Generated Tokens: 69
-I 00:00:04.530947 executorch:stats.h:114] Model Load Time: 1.196000 (seconds)
-I 00:00:04.530949 executorch:stats.h:124] Total inference time: 1.934000 (seconds) Rate: 35.677353 (tokens/second)
-I 00:00:04.530952 executorch:stats.h:132] Prompt evaluation: 0.176000 (seconds) Rate: 306.818182 (tokens/second)
-I 00:00:04.530954 executorch:stats.h:143] Generated 69 tokens: 1.758000 (seconds) Rate: 39.249147 (tokens/second)
-I 00:00:04.530956 executorch:stats.h:151] Time to first generated token: 0.176000 (seconds)
-I 00:00:04.530959 executorch:stats.h:158] Sampling time over 123 tokens: 0.067000 (seconds)
-
+I 00:00:03.072972 executorch:text_llm_runner.cpp:199] RSS after finishing text generation: 1122.187500 MiB (0 if unsupported)
+PyTorchObserver {"prompt_tokens":54,"generated_tokens":36,"model_load_start_ms":1756473387815,"model_load_end_ms":1756473388715,"inference_start_ms":1756473389893,"inference_end_ms":1756473390702,"prompt_eval_end_ms":1756473390023,"first_token_ms":1756473390023,"aggregate_sampling_time_ms":22,"SCALING_FACTOR_UNITS_PER_SECOND":1000}
+I 00:00:03.072993 executorch:stats.h:108] Prompt Tokens: 54 Generated Tokens: 36
+I 00:00:03.072995 executorch:stats.h:114] Model Load Time: 0.900000 (seconds)
+I 00:00:03.072996 executorch:stats.h:124] Total inference time: 0.809000 (seconds) Rate: 44.499382 (tokens/second)
+I 00:00:03.072998 executorch:stats.h:132] Prompt evaluation: 0.130000 (seconds) Rate: 415.384615 (tokens/second)
+I 00:00:03.073000 executorch:stats.h:143] Generated 36 tokens: 0.679000 (seconds) Rate: 53.019146 (tokens/second)
+I 00:00:03.073002 executorch:stats.h:151] Time to first generated token: 0.130000 (seconds)
+I 00:00:03.073004 executorch:stats.h:158] Sampling time over 90 tokens: 0.022000 (seconds)
```

You have successfully run the Llama 3.2 1B Instruct model on your Android smartphone with ExecuTorch using KleidiAI kernels.
@@ -5,7 +5,7 @@ minutes_to_complete: 40

who_is_this_for: Unity developers wanting to analyze the performance of their apps on Android devices

learning_objectives:
- Deploy to Android
- Profile code running on an Android device
- Analyze performance data
@@ -19,7 +19,7 @@ prerequisites:
author: Arm

### Tags
-skilllevels: Intermediate
+skilllevels: Introductory
subjects: Performance and Architecture
armips:
- armv8
@@ -5,7 +5,7 @@ minutes_to_complete: 120

who_is_this_for: This Learning Path is for Vulkan developers who are familiar with rendering and are interested in deploying ray tracing in their applications.

learning_objectives:
- Describe how the Vulkan ray tracing API works.
- Describe how to use ray tracing to implement realistic shadows, reflections, and refractions.
- Implement basic ray tracing effects in a Vulkan renderer.
@@ -18,7 +18,7 @@ prerequisites:
author: Iago Calvo Lista

### Tags
-skilllevels: Intermediate
+skilllevels: Advanced
subjects: Graphics
armips:
- Mali
@@ -5,7 +5,7 @@ minutes_to_complete: 90

who_is_this_for: Developers who want to optimize their Unity apps on Android

learning_objectives:
- Use Arm Neon intrinsics in your Unity C# scripts
- Optimize your code
- Collect and compare performance data using the Unity Profiler and Analyzer tools
@@ -19,7 +19,7 @@ prerequisites:
author: Arm

### Tags
-skilllevels: Intermediate
+skilllevels: Advanced
subjects: Gaming
armips:
- armv8