diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index b9cc2dc326..80c4471325 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -1,239 +1,283 @@ -# Project Overview - -This project is a static website created using Hugo and markdown files. The purpose of the content is to explain how-to topics to software developers targeting various Arm platforms. - -Assume the audience is made up of Arm software developers. Bias information toward Arm platforms. For Linux, assume systems are aarch64 architecture and not x86. Readers also use macOS and Windows on Arm systems, and assume Arm architecture where relevant. - -## Project structure - -The key directories are: - -### Top Level Structure - - /content - The main directory containing all Learning Paths and install guides as markdown files - /themes - HTML templates and styling elements that render the content into the final website - /tools - Python scripts for automated website integrity checking - config.toml - High-level Hugo configuration settings - -### Content Organization: - -The /content directory is organized into: - -- learning-paths/ Core learning content organized by categories: - -- embedded-and-microcontrollers/ MCU, IoT, and embedded development topics - -- servers-and-cloud-computing/ Server, cloud, and enterprise computing topics - -- mobile-graphics-and-gaming/ Mobile app development, graphics, and gaming - -- cross-platform/ Cross-platform development and general programming topics, these appear in multiple categories on the website - -- laptops-and-desktops/ Desktop application development, primarily Windows on Arm and macOS - -- automotive/ Automotive and ADAS development - -- iot/ IoT-specific Learning Paths - -- install-guides/ - Tool installation guides with supporting subdirectories organized by tool categories like docker/, gcc/, license/, browsers/, plus an _images/ directory for screenshots and diagrams - -These are special 
directories and not used for regular content creation: - migration/ Migration guides and resources, this maps to https://learn.arm.com/migration - lists/ Content listing and organization files, this maps to https://learn.arm.com/lists - stats/ Website statistics and analytics, this maps to https://learn.arm.com/stats - -The /content directory is the primary workspace where contributors add new Learning Paths as markdown files, organized into category-specific subdirectories that correspond to the different learning path topics available on the site at https://learn.arm.com/. - -## Content requirements - -Read the files in the directory `content/learning-paths/cross-platform/_example-learning-path` for information about how Learning Path content should be created. Some additional help is listed below. - -### Content structure - -Each Learning Path must have an _index.md file and a _next-steps.md file. The _index.md file contains the main content of the Learning Path. The _next-steps.md file contains links to related content and is included at the end of the Learning Path. - -Additional resources and 'next steps' content should be placed in the `further_reading` section of `_index.md`, NOT in `_next-steps.md`. The `_next-steps.md` file should remain minimal and unmodified as indicated by "FIXED, DO NOT MODIFY" comments in the template. 
- -The _index.md file should contain the following front matter and content sections: - -Front Matter (YAML format): -- `title`: Imperative heading following the [verb] + [technology] + [outcome] format -- `weight`: Numerical ordering for display sequence, weight is 1 for _index.md and each page is ordered by weight, no markdown files should have the same weight in a directory -- `layout`: Template type (usually "learningpathall") -- `minutes_to_complete`: Realistic time estimate for completion -- `prerequisites`: List of required knowledge, tools, or prior learning paths -- `author_primary`: Main contributor's name, multiple authors can be listed separated using - on new lines -- `subjects`: Technology categories for filtering and search, this is a closed list and must match one of the subjects listed on https://learn.arm.com/learning-paths/cross-platform/_example-learning-path/write-2-metadata/ -- `armips`: Relevant Arm IP, stick to Neoverse, Cortex-A, Cortex-M, etc. Don't list specific CPU models or Arm architecture versions -- `tools_software_languages`: Open category listing Programming languages, frameworks, and development tools used -- `skilllevels`: Skill levels allowed are only Introductory and Advanced -- `operatingsystems`: Operating systems used, must match the closed list on https://learn.arm.com/learning-paths/cross-platform/_example-learning-path/write-2-metadata/ - -### Further Reading Curation - -Limit further_reading resources to 4-6 essential links. Prioritize: -- Direct relevance to the topic -- Arm-specific Learning Paths over generic external resources -- Foundation knowledge for target audience -- Required tools (install guides) -- Logical progression from basic to advanced - -Avoid overwhelming readers with too many links, which can cause them to leave the platform. 
- -All Learning Paths should generally include: -Title: [Imperative verb] + [technology/tool] + [outcome] -Introduction paragraph: Context + user goal + value proposition -Prerequisites section with explicit requirements and links -Learning objectives: 3-4 bulleted, measurable outcomes with action verbs -Step-by-step sections with logical progression -Clear next steps/conclusion - -For title formatting: -- MUST use imperative voice ("Deploy", "Configure", "Build", "Create") -- MUST include SEO keywords (technology names, tools) -- Examples: "Deploy applications on Arm servers", "Configure Arm processors for optimal performance" - -Learning Path should always be capitalized. - -### Writing style - -Voice and Tone: -- Second person ("you", "your") - NEVER first person ("I", "we") -- Active voice - AVOID passive constructions -- Present tense for descriptions -- Imperative mood for commands -- Confident and developer-friendly tone -- Encouraging language for complex tasks - -Sentence Structure: -- Average 15-20 words per sentence -- Split complex sentences for scalability -- Plain English - avoid jargon overload -- US spellings required (organize/optimize/realize, not organise/optimise/realise) -- "Arm" capitalization required (Arm processors/Neoverse, never ARM or arm; exceptions: "arm64" and "aarch64" are permitted in code, commands, and outputs) -- Define acronyms on first use -- Parallel structure in all lists - -### Arm naming and architecture terms - -- Use Arm for the brand in prose (for example, "Arm processors", "Arm servers"). -- Use arm64 or aarch64 for the CPU architecture; these are acceptable and interchangeable labels. Prefer whichever term a tool, package, or OS uses natively. -- Do not use ARM in any context. -- ARM64 is used by Windows on Arm and Microsoft documentation, so it is acceptable to use ARM64 when specifically referring to Windows on Arm. 
-- In code blocks, CLI flags, package names, file paths, and outputs, keep the exact casing used by the tool (for example, --arch arm64, uname -m → aarch64). - -### Heading guidelines - -HEADING TYPES: -- Conceptual headings: When explaining technology/motivation ("What is containerization?") -- Imperative headings: When user takes action ("Configure the database") -- Interrogative headings: For FAQ content ("How does Arm differ from x86?") -- ALL headings: Use sentence case (first word capitalized, rest lowercase except proper nouns) - -HIERARCHY: -H1: Page title (imperative + technology + outcome) -H2: Major workflow steps or conceptual sections -H3: Sub-procedures or detailed explanations -H4: Specific technical details or troubleshooting - -### Code samples and formatting - -CONTEXT-BEFORE-CODE RULE: -- ALWAYS provide explanation before code blocks -- Format: [What it does] → [Code] → [Expected outcome] → [Key parameters] - -CODE FORMATTING: - -Use markdown tags for programming languages like bash, python, yaml, json, etc. - -Use console or bash for general commands. Try to use the same one throughout a Learning Path. - -Correct format: - -Use the following command to install required packages: - -```bash -sudo apt-get update && sudo apt-get install -y python3 nodejs -``` - -Use the output tag to show expected command output. - -```output -Reading package lists... Done -Building dependency tree... Done -``` - -FORMATTING STANDARDS: -- **Bold text**: UI elements (buttons, menu items, field names) -- **Italic text**: Emphasis and new terms -- **Code formatting**: Use for file names, commands, code elements - -Use shortcodes for common pitfalls, warnings, important notes - -{{% notice Note %}} -An example note to pay attention to. -{{% /notice %}} - -{{% notice Warning %}} -A warning about a common pitfall. 
-{{% /notice %}} - -## Avoid looking like AI-generated content - -### Bullet List Management -WARNING SIGNS OF OVER-BULLETING: -- More than 3 consecutive sections using bullet lists -- Bullet points that could be combined into narrative paragraphs -- Lists where items don't have parallel structure -- Bullet points that are actually full sentences better suited for paragraphs - -CONVERSION STRATEGY: - -Use flowing narrative instead of excessive bullets. - -For example, use this format instead of the list below it. - -Arm processors deliver improved performance while enhancing security through hardware-level protections. This architecture provides enhanced scalability for cloud workloads and reduces operational costs through energy efficiency. - -Key benefits include: -• Improved performance -• Better security -• Enhanced scalability -• Reduced costs - -### Natural Writing Patterns - -HUMAN-LIKE TECHNIQUES: -- Vary sentence length: Mix short, medium, and complex sentences -- Use transitional phrases: "Additionally", "However", "As a result", "Furthermore" -- Include contextual explanations: Why something matters, not just what to do -- Add relevant examples: Real-world scenarios that illustrate concepts -- Connect ideas logically: Show relationships between concepts and steps - -CONVERSATIONAL ELEMENTS: - -Instead of: "Execute the following command:" -Use: "Now that you've configured the environment, run the following command to start the service:" - -Instead of: "This provides benefits:" -Use: "You'll notice several advantages with this approach, particularly when working with..." - -## Hyperlink guidelines - -Some links are useful in content, but too many links can be distracting and readers will leave the platform following them. Try to put only necessary links in the content and put other links in the "Next Steps" section at the end of the content. Flag any page with too many links for review. 
- -### Internal links - -Use the full path format for internal links: `/learning-paths/category/path-name/` (e.g., `/learning-paths/cross-platform/docker/`). Do NOT use relative paths like `../path-name/`. - -Examples: -- /learning-paths/servers-and-cloud-computing/csp/ (Arm-based instance) -- /learning-paths/cross-platform/docker/ (Docker learning path) - -### External links - -Use the full URL for external links that are not on learn.arm.com, these open in a new tab. - -### Link Verification Process - -When creating Learning Path content: -- Verify internal links exist before adding them -- Use semantic search or website browsing to confirm Learning Path availability -- Prefer verified external authoritative sources over speculative internal links -- Test link formats against existing Learning Path examples -- Never assume Learning Paths exist without verification - -This instruction set enables high-quality Arm Learning Paths content while maintaining consistency and technical accuracy. \ No newline at end of file +# Project Overview + +This project is a collection of "learning paths" (long-form tutorials) and "install guides" (shorter software installation guides), hosted on a static website using Hugo and markdown files. The content explains how to develop software on Arm for software developers targeting various Arm platforms. + +Assume the audience is made up of Arm software developers. Bias all information toward Arm platforms. For Linux, assume systems are aarch64 architecture and not x86. Readers also use macOS and Windows on Arm systems, and assume Arm architecture where relevant. 
+ +## Project structure + +The key directories are: + +### Top level structure + +/content - The main directory containing all Learning Paths and install guides as markdown files +/themes - HTML templates and styling elements that render the content into the final website +/tools - Python scripts for automated website integrity checking +config.toml - High-level Hugo configuration settings + +### Content organization: + +The /content directory is organized into: + +- learning-paths/ Core learning content organized by categories: + -- embedded-and-microcontrollers/ MCU, IoT, and embedded development topics + -- servers-and-cloud-computing/ Server, cloud, and enterprise computing topics + -- mobile-graphics-and-gaming/ Mobile app development, graphics, and gaming + -- cross-platform/ Cross-platform development and general programming topics, these appear in multiple categories on the website + -- laptops-and-desktops/ Desktop application development, primarily Windows on Arm and macOS + -- automotive/ Automotive and ADAS development + -- iot/ IoT-specific Learning Paths + +- install-guides/ - Tool installation guides with supporting subdirectories organized by tool categories like docker/, gcc/, license/, browsers/, plus an _images/ directory for screenshots and diagrams + +These are special directories and not used for regular content creation: + migration/ Migration guides and resources, this maps to https://learn.arm.com/migration + lists/ Content listing and organization files, this maps to https://learn.arm.com/lists + stats/ Website statistics and analytics, this maps to https://learn.arm.com/stats + +The /content directory is the primary workspace where contributors add new Learning Paths as markdown files, organized into category-specific subdirectories that correspond to the different learning path topics available on the site at https://learn.arm.com/. 
+ +## Content requirements + +Read the files in the directory `content/learning-paths/cross-platform/_example-learning-path` for information about how Learning Path content should be created. Also see the guidelines below. + +### Content structure + +Each Learning Path must have an _index.md file and a _next-steps.md file. The _index.md file contains the main content of the Learning Path. The _next-steps.md file contains links to related content and is included at the end of the Learning Path. + +Additional resources and 'next steps' content should be placed in the `further_reading` section of `_index.md`, NOT in `_next-steps.md`. The `_next-steps.md` file should remain minimal and unmodified as indicated by "FIXED, DO NOT MODIFY" comments in the template. + +The _index.md file should contain the following front matter and content sections: + +Front Matter (YAML format): +- `title`: Imperative heading following the [verb] + [technology] + [outcome] format +- `weight`: Numerical ordering for display sequence; weight is 1 for _index.md, each page is ordered by weight, and no two markdown files in a directory should have the same weight +- `layout`: Template type (usually "learningpathall") +- `minutes_to_complete`: Realistic time estimate for completion +- `prerequisites`: List of required knowledge, tools, or prior learning paths +- `author`: Main contributor's name; multiple authors can be listed on new lines, each preceded by - +- `subjects`: Technology categories for filtering and search, this is a closed list and must match one of the subjects listed on https://learn.arm.com/learning-paths/cross-platform/_example-learning-path/write-2-metadata/ +- `armips`: Relevant Arm IP, stick to Neoverse, Cortex-A, Cortex-M, etc.
Don't list specific CPU models or Arm architecture versions +- `tools_software_languages`: Open category listing programming languages, frameworks, and development tools used +- `skilllevels`: Skill levels allowed are only Introductory and Advanced +- `operatingsystems`: Operating systems used, must match the closed list on https://learn.arm.com/learning-paths/cross-platform/_example-learning-path/write-2-metadata/ + +### Further reading curation + +Limit further_reading resources to four to six essential links. Prioritize: +- Direct relevance to the topic +- Arm-specific Learning Paths over generic external resources +- Foundation knowledge for target audience +- Required tools (install guides) +- Logical progression from basic to advanced + +Avoid overwhelming readers with too many links, which can cause them to leave the platform. + +All Learning Paths should generally include: +Title: [Imperative verb] + [technology/tool] + [outcome] +Introduction paragraph: Context + user goal + value proposition +Prerequisites section with explicit requirements and links +Learning objectives: three to four bulleted, measurable outcomes with action verbs +Step-by-step sections with logical progression +Clear next steps/conclusion + +For title formatting: +- MUST use imperative voice ("Deploy", "Configure", "Build", "Create") +- MUST include SEO keywords (technology names, tools) +- Examples: "Deploy applications on Arm servers", "Configure Arm processors for optimal performance" + +The term "Learning Path" should always be capitalized.
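As a quick reference for the front-matter fields described above, a minimal `_index.md` front matter might look like the sketch below. All values shown are hypothetical placeholders, not taken from a real Learning Path; check `subjects` and `operatingsystems` against the closed lists in the example Learning Path before using them.

```yaml
# Hypothetical front matter for a Learning Path _index.md (placeholder values)
title: Deploy a web application on Arm servers
weight: 1                              # _index.md always uses weight 1
layout: learningpathall
minutes_to_complete: 30
prerequisites:
    - An Arm-based cloud instance
author:
    - Jane Doe
subjects: Performance and Architecture # must match the closed subjects list
armips:
    - Neoverse
tools_software_languages:
    - Docker
skilllevels: Introductory
operatingsystems:
    - Linux                            # must match the closed OS list
```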
+ +### Writing style + +Voice and Tone: +- Second person ("you", "your") - NEVER first person ("I", "we") +- Active voice - AVOID passive constructions +- Present tense for descriptions +- Imperative mood for commands +- Confident and developer-friendly tone +- Encouraging language for complex tasks +- Use inclusive language: + - Use "primary/subordinate" instead of "master/slave" terminology + - Don't use gendered examples or assumptions + - Be mindful of cultural references that might not translate globally + - Focus on clear, accessible language for all developers + +### Sentence structure and clarity +- Average 15-20 words per sentence +- Split complex sentences for readability +- Plain English - avoid jargon overload +- US spellings required (organize/optimize/realize, not organise/optimise/realise) +- "Arm" capitalization required (Arm processors/Neoverse, never ARM or arm; exceptions: "arm64" and "aarch64" are permitted in code, commands, and outputs) +- Define acronyms on first use +- Parallel structure in all lists + +### Readability and section flow +- Flag any section over 700 words and suggest natural split points +- Warn if more than 300 words appear between code examples +- Identify paragraphs with sentences averaging over 20 words +- Note sections introducing more than two new concepts +- Flag pages over 3500 words total +- Note sections that might benefit from encouragement or progress markers +- Identify missing celebration of progress or milestones +- Recap what learners have accomplished at section ends +- Provide "check your understanding" moments that aren't intimidating +- Too much explanation is exhausting; too little is confusing +- Use visual breaks to prevent walls of text - code blocks count as visual breaks +- Walls of text cause people to bounce from the page +- If you're explaining three or more things in one section, split it into separate sections +- Each code block should be preceded by one to three sentences explaining what it does.
+ +### Word choice and style +- Use these preferred terms and phrases for consistency: + - Numbers and units: Spell out numbers one through five (one, two, three, four, five), after this use numerals (6, 7, 8...). Use proper spacing for units: "1 GB", "23 MB/day" (not "1GB", "23MB/day"). Use "K" for thousands: "64K" (not "64k"). Use abbreviations for data rates: "Gbps" (not "Gb per second"). + - Common phrases: "To [action]" (not "Follow the steps below to [action]"), "for example" (not "e.g."), "that is" (not "i.e."), "because" (not "since"), "also" (not "in addition"), "to" (not "in order to"), "see" (not "refer to"), "use" (not "utilize" or "leverage"), "need" (not "require"), "can" or "might" (not "may"), "set up" as verb, "setup" as noun, "therefore" (not "ergo"), "namely" (not "viz."), "avoid" (not "try not to"). + - Avoid condescending language: Don't use "simply", "just", "obviously", "clearly" - what's simple to you might not be to the learner. + - Acknowledge when something can be tricky: Use phrases like "this step can be confusing at first" to validate learner experience. + - Normalize errors: Use phrases like "if you see this error, here's how to fix it" to reassure learners that errors are part of the learning process. + - User interface terms: "select" or "tap" (not "click" for mobile/touch interfaces), "keyboard shortcut" (not "key combination"), "Ctrl key" (capitalized), "double-tap" (not "double-click" for touch interfaces). + - Contractions and simplification: Use contractions such as "don't", "isn't", "it's", "that's", "you're", "you'll". Remove unnecessary qualifiers like "quite" and "very", and prefer "significant" over "massive". "an LLM" (not "a LLM"). "easy-to-use" when used as adjective. "fixed-width" (not "fixed-length"). "read-to-write ratio" (not "read to write ratio").
+ +## Content structure and consistency + +### Cross-file consistency +- Use the same technical terms consistently throughout all sections +- Apply the word choice and style guidelines uniformly across all files +- Maintain consistent capitalization of product names, technologies, and concepts +- Use the same abbreviations and acronyms throughout (define once, use consistently) +- Maintain the same voice and tone across all sections +- Ensure consistent use of second person ("you", "your") throughout +- Apply the same level of formality and technical depth across sections +- Keep instructional style consistent (imperative mood for actions) +- Follow consistent heading hierarchy throughout the Learning Path +- Use parallel structure in similar sections across different files +- Maintain consistent section organization and flow +- Apply uniform formatting for code blocks, lists, and callouts +- Ensure appropriate skill level consistency (Introductory or Advanced) +- Maintain consistent technical detail appropriate for the target audience +- Balance complexity appropriately across all sections +- Provide consistent prerequisite assumptions throughout + +## Formatting and code samples + +### Heading guidelines +- Use sentence case for all headings (first word capitalized, rest lowercase except proper nouns) +- Heading types: + - Conceptual headings: When explaining technology/motivation ("What is containerization?") + - Imperative headings: When user takes action ("Configure the database") + - Interrogative headings: For FAQ content ("How does Arm differ from x86?") +- Hierarchy: + - H1: Page title (imperative + technology + outcome) + - H2:
Major workflow steps or conceptual sections + - H3: Sub-procedures or detailed explanations + - H4: Specific technical details or troubleshooting + +### Code samples and formatting +- ALWAYS provide explanation before code blocks +- Format: [What it does] → [Code] → [Expected outcome] → [Key parameters] +- Use markdown tags for programming languages like bash, python, yaml, json, etc. +- Use console or bash for general commands. Try to use the same one throughout a Learning Path. +- Use the output tag to show expected command output. +- Output descriptions: Use "The output is similar to:" or "The expected output is:" (not "The output will look like:"). Use "builds" (not "will build") and "gives" (not "would give") for present tense descriptions. +- Formatting standards: **Bold text** for UI elements (buttons, menu items, field names), *Italic text* for emphasis and new terms, `Code formatting` for file names, commands, code elements. +- Use shortcodes for common pitfalls, warnings, important notes. + +## Arm naming and architecture terms +- Use Arm for the brand in prose (for example, "Arm processors", "Arm servers"). +- Use arm64 or aarch64 for the CPU architecture; these are acceptable and interchangeable labels. Prefer whichever term a tool, package, or OS uses natively. +- Always use "Arm" (not "ARM") in all contexts except when referring to specific technical terms that require the original casing. +- ARM64 is used by Windows on Arm and Microsoft documentation, so it is acceptable to use ARM64 when specifically referring to Windows on Arm. +- In code blocks, CLI flags, package names, file paths, and outputs, keep the exact casing used by the tool (for example, --arch arm64, uname -m → aarch64). + +## Hyperlink guidelines +- Use the full path format for internal links: `/learning-paths/category/path-name/` (e.g., `/learning-paths/cross-platform/docker/`). Do NOT use relative paths like `../path-name/`. 
+- Use the full URL for external links that are not on learn.arm.com, these open in a new tab. +- When creating Learning Path content: + - Verify internal links exist before adding them + - Use semantic search or website browsing to confirm Learning Path availability + - Prefer verified external authoritative sources over speculative internal links + - Test link formats against existing Learning Path examples + - Never assume Learning Paths exist without verification +- Some links are useful in content, but too many links can be distracting and readers will leave the platform following them. Include only necessary links in the content; place others in the "Next Steps" section at the end. Flag any page with too many links for review. + +## Avoid looking like AI-generated content +- Warning signs of over-bulleting: More than 3 consecutive sections using bullet lists, bullet points that could be combined into narrative paragraphs, lists where items don't have parallel structure, bullet points that are actually full sentences better suited for paragraphs. +- Use flowing narrative instead of excessive bullets. +- Use natural writing patterns: Vary sentence length, use transitional phrases, include contextual explanations, add relevant examples, connect ideas logically. +- Use conversational elements: Instead of "Execute the following command:", use "Now that you've configured the environment, run the following command to start the service:". Instead of "This provides benefits:", use "You'll notice several advantages with this approach, particularly when working with...". + +## AI-specific guidelines for content creation and editing + +### Context awareness +- Consider the learner's likely environment (development vs. production, local vs. 
cloud) +- Recognize when content assumes x86 defaults and suggest Arm alternatives +- Flag when third-party tools may have limited Arm support +- Suggest Arm-native alternatives when available (e.g., Arm compilers, optimized libraries) + +### Technical depth consistency +- Maintain appropriate complexity level throughout the Learning Path +- Avoid oversimplifying for Advanced skill level content +- Don't assume prior knowledge beyond stated prerequisites +- Balance theoretical explanation with practical implementation + +### Platform-specific considerations +- Default to Arm-optimized solutions and configurations +- Mention x86 alternatives only when Arm solutions don't exist +- Consider performance implications specific to Arm architectures +- Address common Arm migration challenges when relevant + +### Quality assurance +- Flag inconsistent terminology usage across sections +- Identify missing error handling or troubleshooting guidance +- Suggest where visual aids (diagrams, screenshots) would improve understanding +- Recommend splitting overly complex sections +- Verify that code examples follow established patterns in the repository + +### Accessibility and inclusivity +- Ensure content is screen reader compatible +- Provide descriptive alt text for images and diagrams +- Use clear, descriptive link text (not "click here" or "read more") +- Avoid assumptions about user's physical capabilities or setup + +### SEO and discoverability +- Use Arm-specific keywords naturally throughout content +- Include relevant technical terms that developers search for +- Optimize titles and headings for search engines +- Use semantic HTML structure in markdown when possible +- Consider how content will appear in search results + +### Cross-reference validation +- Verify all internal links point to existing content +- Check that referenced Learning Paths and install guides are current +- Ensure cross-references between sections remain accurate after edits +- Flag broken or 
outdated external links +- Maintain consistency in how related content is referenced + +### Performance testing guidance +- Include benchmarks when comparing Arm vs. x86 performance +- Suggest performance testing steps for resource-intensive applications +- Recommend profiling tools that work well on Arm platforms +- Include guidance on measuring and optimizing for Arm-specific performance characteristics +- Mention when performance improvements are architecture-specific + +### AI optimization (AIO) guidance +- Structure content with clear, semantic headings that AI can parse and understand +- Use descriptive, standalone sentences that make sense without surrounding context +- Include explicit problem statements and clear solutions for AI to reference +- Format code examples with proper language tags and clear explanations +- Use consistent terminology that AI systems can reliably associate with Arm development +- Include complete, self-contained examples rather than partial snippets +- Write FAQ-style sections that directly answer common developer questions +- Use bullet points and numbered lists for AI to easily extract key information +- Include explicit "what you'll learn" and "prerequisites" sections for AI context +- Structure troubleshooting sections with clear problem-solution pairs +- Use standard markdown formatting that AI crawlers can parse effectively +- Include relevant technical keywords naturally throughout the content +- Write comprehensive summaries that AI can use as content overviews +- Ensure each section can stand alone as a coherent piece of information +- Use clear, declarative statements rather than implied or contextual references diff --git a/.wordlist.txt b/.wordlist.txt index 3111457920..6c185d8c42 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -4978,3 +4978,65 @@ multidisks testsh uops subgraph +ArgumentList +Autocannon +Buildkite +CimInstance +ClassName +DischargeRate +FPM +FastCGI +FilePath +Halide’s +HasExited +ImportError +NIC’s 
+NVM +NodeJS +Opcache +OpenBMC’s +PHPBench +ParentProcessId +PassThru +ProcessID +QAT +REPL +RaceNight +RemainingCapacity +Ruifeng +Xdebug +appPid +argList +autocannon +autoexit +autoloading +benchArrayPush +benchStringConcat +buildkite +childPid +childProcess +cmdLine +eq +exePath +ffplay +fpm +gpl +hh +mW +mWh +mbstring +memPriv +npmjs +opcache +outHead +outLine +outputFile +phar +phpbench +phpinfo +utf +vesion +wwwrun +xdebug +zoneIdentifier +zypper \ No newline at end of file diff --git a/content/install-guides/multipass.md b/content/install-guides/multipass.md index d1ef439ccc..dc4fdb2f91 100644 --- a/content/install-guides/multipass.md +++ b/content/install-guides/multipass.md @@ -53,7 +53,7 @@ Multipass uses the terms virtual machine and instance synonymously. Download Multipass for macOS. ```console -wget https://github.com/canonical/multipass/releases/download/v1.16.0/multipass-1.16.0+mac-Darwin.pkg +wget https://github.com/canonical/multipass/releases/download/v1.16.1/multipass-1.16.1+mac-Darwin.pkg ``` ### How do I install Multipass on macOS? @@ -61,7 +61,7 @@ wget https://github.com/canonical/multipass/releases/download/v1.16.0/multipass- Install the download using the package command. ```console -sudo installer -pkg multipass-1.16.0+mac-Darwin.pkg -target / +sudo installer -pkg multipass-1.16.1+mac-Darwin.pkg -target / ``` The getting started instructions below use the command line interface. If you prefer to use the graphical interface start it from the macOS Launchpad, the initial screen is shown below. You can use the UI to create, start, and stop virtual machines. @@ -112,7 +112,7 @@ HINT: sudo /usr/sbin/kvm-ok If KVM is available, proceed with the install. -### How do I install the Sanp daemon on Arm Linux? +### How do I install the Snap daemon on Arm Linux? You may need to install the Snap daemon, `snapd`, before installing Multipass. 
@@ -130,8 +130,6 @@ If you need to install `snapd` run: sudo apt install snapd -y ``` - - {{% notice Note %}} You can select from three Multipass releases: stable, beta, or edge. The default version is stable. Add `--beta` or `--edge` to the install command below to select these more recent versions. @@ -166,25 +164,26 @@ Multipass runs Ubuntu images. The last three LTS (long-term support) versions ar To see the available images run the `find` command. Any of the listed images can be used to create a new instance. ```bash -sudo multipass find +multipass find ``` + The output from `find` will be similar to the below. ```output Image Aliases Version Description -20.04 focal 20240821 Ubuntu 20.04 LTS -22.04 jammy 20241002 Ubuntu 22.04 LTS -24.04 noble,lts 20241004 Ubuntu 24.04 LTS -daily:24.10 oracular,devel 20241009 Ubuntu 24.10 +22.04 jammy 20251001 Ubuntu 22.04 LTS +24.04 noble,lts 20251001 Ubuntu 24.04 LTS +25.04 plucky 20251003 Ubuntu 25.04 +daily:25.10 questing,devel 20251015 Ubuntu 25.10 -Blueprint Aliases Version Description +Blueprint (deprecated) Aliases Version Description anbox-cloud-appliance latest Anbox Cloud Appliance charm-dev latest A development and testing environment for charmers docker 0.4 A Docker environment with Portainer and related tools jellyfin latest Jellyfin is a Free Software Media System that puts you in control of managing and streaming your media. minikube latest minikube is local Kubernetes -ros-noetic 0.1 A development and testing environment for ROS Noetic. ros2-humble 0.1 A development and testing environment for ROS 2 Humble. +ros2-jazzy 0.1 A development and testing environment for ROS 2 Jazzy. ``` ### How do I launch a Multipass instance? diff --git a/content/install-guides/pytorch.md b/content/install-guides/pytorch.md index 9742b36d2a..4701be399b 100644 --- a/content/install-guides/pytorch.md +++ b/content/install-guides/pytorch.md @@ -49,6 +49,7 @@ PyTorch requires Python 3, and this can be installed with `pip`. 
For Ubuntu, run: ```bash +sudo apt update sudo apt install python-is-python3 python3-pip python3-venv -y ``` @@ -71,7 +72,7 @@ source venv/bin/activate In your active virtual environment, install PyTorch: ```bash -sudo pip install torch torchvision torchaudio +pip install torch torchvision torchaudio ``` ## How do I get started with PyTorch? diff --git a/content/learning-paths/embedded-and-microcontrollers/_index.md b/content/learning-paths/embedded-and-microcontrollers/_index.md index e5d9c4f74d..694dc9cd6d 100644 --- a/content/learning-paths/embedded-and-microcontrollers/_index.md +++ b/content/learning-paths/embedded-and-microcontrollers/_index.md @@ -52,7 +52,7 @@ tools_software_languages_filter: - DSTREAM: 2 - Edge AI: 2 - Edge Impulse: 1 -- ExecuTorch: 3 +- ExecuTorch: 4 - FastAPI: 1 - FPGA: 1 - Fusion 360: 1 diff --git a/content/learning-paths/embedded-and-microcontrollers/rpi-llama3/_index.md b/content/learning-paths/embedded-and-microcontrollers/rpi-llama3/_index.md index 1e47bb2d34..9931e67a2c 100644 --- a/content/learning-paths/embedded-and-microcontrollers/rpi-llama3/_index.md +++ b/content/learning-paths/embedded-and-microcontrollers/rpi-llama3/_index.md @@ -32,6 +32,7 @@ tools_software_languages: - Generative AI - Raspberry Pi - Hugging Face + - ExecuTorch diff --git a/content/learning-paths/laptops-and-desktops/_index.md b/content/learning-paths/laptops-and-desktops/_index.md index b0c3d43298..aa493bf3dd 100644 --- a/content/learning-paths/laptops-and-desktops/_index.md +++ b/content/learning-paths/laptops-and-desktops/_index.md @@ -11,11 +11,11 @@ operatingsystems_filter: - ChromeOS: 2 - Linux: 34 - macOS: 9 -- Windows: 45 +- Windows: 46 subjects_filter: - CI-CD: 5 - Containers and Virtualization: 7 -- Migration to Arm: 29 +- Migration to Arm: 30 - ML: 2 - Performance and Architecture: 27 subtitle: Create and migrate apps for power efficient performance @@ -38,6 +38,7 @@ tools_software_languages_filter: - CSS: 1 - Daytona: 1 - Docker: 5 +- FFmpeg: 1 
- GCC: 12 - Git: 1 - GitHub: 3 @@ -62,6 +63,7 @@ tools_software_languages_filter: - ONNX Runtime: 1 - OpenCV: 1 - perf: 4 +- PowerShell: 1 - Python: 6 - QEMU: 1 - Qt: 2 diff --git a/content/learning-paths/laptops-and-desktops/win-resource-ps1/_index.md b/content/learning-paths/laptops-and-desktops/win-resource-ps1/_index.md index 3e37b6e743..93ea7d9143 100644 --- a/content/learning-paths/laptops-and-desktops/win-resource-ps1/_index.md +++ b/content/learning-paths/laptops-and-desktops/win-resource-ps1/_index.md @@ -11,12 +11,12 @@ who_is_this_for: This is an introductory topic for developers who want to measur learning_objectives: - Run video encode and decode tasks by using FFmpeg - - Benchmark video encode task - - Sample CPU / memory / power usage of video decode task + - Benchmark the video encode task + - Sample CPU, memory, and power usage for the video decode task prerequisites: - A Windows on Arm computer such as the Lenovo Thinkpad X13s running Windows 11 - - Any code editor. [Visual Studio Code for Arm64](https://code.visualstudio.com/docs/?dv=win32arm64user) is suitable. + - A code editor such as [Visual Studio Code for Windows on Arm](https://code.visualstudio.com/docs/?dv=win32arm64user) author: Ruifeng Wang diff --git a/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-1.md b/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-1.md index e07941f6d1..0b0e8b6856 100644 --- a/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-1.md +++ b/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-1.md @@ -1,5 +1,5 @@ --- -title: Application and data set +title: Set up FFmpeg and encode a test video weight: 2 ### FIXED, DO NOT MODIFY @@ -7,41 +7,63 @@ layout: learningpathall --- ## Overview -System resource usage provides an approach to understand the performance of an application as a black box. This Learning Path demonstrates how to sample system resource usage by using a script. 
+System resource usage provides an approach to understand the performance of an application as a black box. This Learning Path demonstrates how to sample system resource usage using a script. -The application used is FFmpeg. It is a tool set that performs video encode and decode tasks. We will run the same tests with both x86_64 binary (through emulation) and Arm64 native binary. +The example application you will use is FFmpeg, a tool set that performs video encode and decode tasks. You will run the same tests with both the x86_64 binary (using Windows instruction emulation) and the Arm64 native binary on a Windows on Arm computer. ## Application -Binary builds are available. You don't need to build them from source. Download executable files for Windows: +Binary builds of FFmpeg are available, so you don't need to build them from source. -1. Download [x86_64 package](https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2025-07-31-14-15/ffmpeg-n7.1.1-56-gc2184b65d2-win64-gpl-7.1.zip). -2. Download [Arm64 native package](https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2025-07-31-14-15/ffmpeg-n7.1.1-56-gc2184b65d2-winarm64-gpl-7.1.zip). +To get started: -Unzip the downloaded packages. You can find the binaries in **bin** folder. Note paths to **ffmpeg.exe** and **ffplay.exe**. They are used in later steps. +1. Download the [FFmpeg x86_64 package](https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2025-07-31-14-15/ffmpeg-n7.1.1-56-gc2184b65d2-win64-gpl-7.1.zip). + +2. Download the [FFmpeg Arm64 native package](https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2025-07-31-14-15/ffmpeg-n7.1.1-56-gc2184b65d2-winarm64-gpl-7.1.zip). + +3. Unzip the downloaded packages. + +You can find the binaries in the `bin` folder. + +{{% notice Note %}} +Make note of the paths to both versions of `ffmpeg.exe` and `ffplay.exe`, so you can run each one and compare the results. 
+{{% /notice %}} ## Video source -Download test video [RaceNight](https://ultravideo.fi/video/RaceNight_3840x2160_50fps_420_8bit_YUV_RAW.7z) from a public dataset. Unzip the package and note path to the uncompressed yuv file. +Download the test video [RaceNight](https://ultravideo.fi/video/RaceNight_3840x2160_50fps_420_8bit_YUV_RAW.7z) from a public dataset. + +Unzip the package and note the path to the uncompressed `.yuv` file. ## Video encoding -The downloaded video file is in yuv raw format. It means playback of the video file involves no decoding effort. You need to encode the raw video with some compressing algorithms to add computation pressure at playback. +The downloaded video file is in YUV raw format, which means playback of the video file involves no decoding effort. You need to encode the raw video with compression algorithms to add computation pressure during playback. + +Use `ffmpeg.exe` to compress the YUV raw video with the x265 encoder and convert the file format to `.mp4`. + +Assuming you downloaded the files and extracted them in the current directory, open a terminal and run the following command: -Use **ffmpeg.exe** to compress the yuv raw video with x265 algorithm and convert file format to mp4. Open a terminal and run command: ```console -path\to\ffmpeg.exe -f rawvideo -pix_fmt yuv420p -s 3840x2160 -r 50 -i D:\path\to\RaceNight_YUV_RAW\RaceNight_3840x2160_50fps_8bit.yuv -vf scale=1920:1080 -c:v libx265 -preset medium -crf 20 D:\RaceNight_1080p.mp4 -benchmark -stats -report +ffmpeg-n7.1.1-56-gc2184b65d2-win64-gpl-7.1\ffmpeg-n7.1.1-56-gc2184b65d2-win64-gpl-7.1\bin\ffmpeg.exe -f rawvideo -pix_fmt yuv420p -s 3840x2160 -r 50 -i RaceNight_3840x2160_50fps_420_8bit_YUV_RAW\RaceNight_3840x2160_50fps_8bit.yuv -vf scale=1920:1080 -c:v libx265 -preset medium -crf 20 RaceNight_1080p.mp4 -benchmark -stats -report ``` {{% notice Note %}} -Modify the paths to `ffmpeg.exe` and yuv raw video file accordingly. 
+Modify the paths to `ffmpeg.exe` and the YUV raw video file to match your locations. {{% /notice %}} -The command transforms video size, compresses the video into a H.265 encoded mp4 file. `benchmark` option is turned on to show performance data at the same time. The generated file is at D:\RaceNight_1080p.mp4. +The command transforms the video size and compresses the video into an MP4 file using H.265 encoding (via the x265 encoder). + +The `benchmark` option is turned on to show performance data at the same time. + +The generated file will be at RaceNight_1080p.mp4. + +Run the command with both the x86_64 and the Arm64 versions of FFmpeg and compare the output. ### View results -Shown below is example output from running x86_64 version ffmpeg.exe: + +The output below is from the x86_64 version of `ffmpeg.exe`: + ```output x265 [info]: tools: rd=3 psy-rd=2.00 early-skip rskip mode=1 signhide tmvp x265 [info]: tools: b-intra strong-intra-smoothing lslices=6 deblock sao -Output #0, mp4, to 'D:\RaceNight_1080p.mp4': +Output #0, mp4, to 'RaceNight_1080p.mp4': Metadata: encoder : Lavf61.7.100 Stream #0:0: Video: hevc (hev1 / 0x31766568), yuv420p(tv, progressive), 1920x1080, q=2-31, 50 fps, 12800 tbn @@ -61,11 +83,12 @@ x265 [info]: Weighted P-Frames: Y:0.0% UV:0.0% encoded 600 frames in 71.51s (8.39 fps), 9075.96 kb/s, Avg QP:27.27 ``` -Example output from running Arm64 native ffmpeg.exe: +The output below is from the Arm64 native compiled `ffmpeg.exe`: + ```output x265 [info]: tools: rd=3 psy-rd=2.00 early-skip rskip mode=1 signhide tmvp x265 [info]: tools: b-intra strong-intra-smoothing lslices=6 deblock sao -Output #0, mp4, to 'D:\RaceNight_1080p.mp4': +Output #0, mp4, to 'RaceNight_1080p.mp4': Metadata: encoder : Lavf61.7.100 Stream #0:0: Video: hevc (hev1 / 0x31766568), yuv420p(tv, progressive), 1920x1080, q=2-31, 50 fps, 12800 tbn @@ -83,4 +106,8 @@ x265 [info]: frame B: 451, Avg QP:28.40 kb/s: 5878.38 x265 [info]: Weighted P-Frames: Y:0.0% UV:0.0% encoded 600 
frames in 26.20s (22.90 fps), 9110.78 kb/s, Avg QP:27.23 -``` \ No newline at end of file +``` + +The last line of each output shows the run time and the frames per second for each build of FFmpeg. + +Continue to learn how to track resource usage and compare each version. \ No newline at end of file diff --git a/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-2.md b/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-2.md index 50708a2e35..0d6f7edfd7 100644 --- a/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-2.md +++ b/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-2.md @@ -1,19 +1,21 @@ --- -title: Tracking system resource +title: Track system resources weight: 3 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Sampling video decoding resource usage -A PowerShell script does all the work. It launches the video decoding task, samples CPU and memory usage, and outputs sampled data to a file with format. +## Sample video decoding resource usage + +To monitor resource usage during video decoding, use the following PowerShell script. This script starts the decoding process, periodically records CPU and memory statistics, and saves the results to a CSV file for analysis. + +Open your code editor, copy the content below, and save it as `sample_decoding.ps1`. -Open your code editor, copy content below and save it as `sample_decoding.ps1`. 
```PowerShell { line_numbers = true } param ( - [string]$exePath = "path\to\ffplay.exe", - [string[]]$argList = @("-loop", "15", "-autoexit", "D:\RaceNight_1080p.mp4"), + [string]$exePath = "ffmpeg-n7.1.1-56-gc2184b65d2-win64-gpl-7.1\ffmpeg-n7.1.1-56-gc2184b65d2-win64-gpl-7.1\bin\ffplay.exe", + [string[]]$argList = @("-loop", "15", "-autoexit", "RaceNight_1080p.mp4"), [int]$interval = 2, [string]$outputFile = "usage_log.csv" ) @@ -104,32 +106,35 @@ while (-not $process.HasExited) { } ``` -{{% notice Note %}} -Modify the path to `ffplay.exe` on line 2 accordingly. -{{% /notice %}} +Before you run the script, modify the path to `ffplay.exe` on line 2 to match your installation location. Run the script: + ```console Set-ExecutionPolicy -Scope Process RemoteSigned .\sample_decoding.ps1 ``` -A video starts playing. It ends in 3 minutes. And then you can find the sample results file **usage_log.csv** in current directory. + +A video starts playing and completes in 3 minutes. When finished, you can find the results file `usage_log.csv` in the current directory. {{% notice Note %}} -Script execution can be blocked due to policy configuration. The `Set-ExecutionPolicy` line allows local script to run during this session. +Script execution may be blocked due to security policy configuration. The `Set-ExecutionPolicy` command allows local scripts to run during this session. {{% /notice %}} -### Script explained -The `param` section defines variables including binary path, video playback arguments, sampling interval and result file path. +### Script explanation + +The `param` section defines variables including the binary path, video playback arguments, sampling interval, and result file path. You can modify these values as needed. + +Lines 15-26 check and modify the binary file attributes. The binaries in use are downloaded from the web and may be blocked from running due to lack of digital signature. These lines unlock the binaries. 
-Line 15 - Line 26 check and modify binary file attribute. The binaries in use are downloaded from the web. They can be blocked to run due to lack of signature. These lines unlock the binaries. +Line 41 retrieves all child processes of the main process. The statistical data includes resources used by all processes spawned by the main process. -Line 41 gets all the child processes of the main process. The statistic data include resources used by all the processes spawned by the main process. +The `while` section collects CPU and memory usage periodically until the application exits. The CPU usage represents accumulated time that the process runs on the CPU. The memory usage shows the size of memory occupation with or without shared spaces accounted for. -The `while` setction collects processes' CPU and memory usage periodically until the application exits. The CPU usage is accumulated time length that the process runs on CPU. And the memory usage is size of memory occupation with or without shared spaces accounted. 
+### View results + +The output below shows the results from running the x86_64 version of `ffplay.exe`: -### View result -Shown below is example sample result from running x86_64 version ffplay.exe: ```output Timestamp,CPU Sum (s),Memory Sum (MB),Memory Private Sum (MB),CPU0 (s),Memory0 (MB),Memory Private0 (MB),CPU1 (s),Memory1 (MB),Memory Private1 (MB) 2025-08-18T10:40:12.3480939+08:00,3.6875,378.65,342.16,3.671875,366.3515625,340.33984375,0.015625,12.296875,1.82421875 @@ -137,7 +142,8 @@ Timestamp,CPU Sum (s),Memory Sum (MB),Memory Private Sum (MB),CPU0 (s),Memory0 ( 2025-08-18T10:43:09.7262439+08:00,396.375,391.71,355.00,396.359375,379.453125,353.2421875,0.015625,12.2578125,1.7578125 ``` -Example result from running Arm64 native ffplay.exe: +The output below shows the results from running the Arm64 version of `ffplay.exe`: + ```output Timestamp,CPU Sum (s),Memory Sum (MB),Memory Private Sum (MB),CPU0 (s),Memory0 (MB),Memory Private0 (MB),CPU1 (s),Memory1 (MB),Memory Private1 (MB) 2025-08-18T10:36:04.3654823+08:00,3.296875,340.51,328.17,3.28125,328.18359375,326.359375,0.015625,12.32421875,1.8125 @@ -145,4 +151,4 @@ Timestamp,CPU Sum (s),Memory Sum (MB),Memory Private Sum (MB),CPU0 (s),Memory0 ( 2025-08-18T10:39:01.7856168+08:00,329.109375,352.53,339.96,329.09375,340.23046875,338.20703125,0.015625,12.30078125,1.75390625 ``` -The sample result file is in **csv** format. You can open it with spreadsheet applications like Microsoft Excel for a better view and plot lines for data analysis. +The sample result file uses CSV (comma-separated values) format. You can open it with spreadsheet applications like Microsoft Excel for better visualization and create charts for data analysis. 
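The usage CSV written by `sample_decoding.ps1` can also be summarized programmatically instead of in a spreadsheet. The Python sketch below is illustrative rather than part of the Learning Path; it assumes the column names shown in the sample output, such as `CPU Sum (s)` and `Memory Sum (MB)`, and the `summarize_usage` helper name is hypothetical:

```python
import csv

def summarize_usage(path="usage_log.csv"):
    """Summarize a CSV produced by sample_decoding.ps1.

    Assumes the header shown in the sample output, including the
    "CPU Sum (s)" and "Memory Sum (MB)" columns.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if len(rows) < 2:
        return None
    # "CPU Sum (s)" is cumulative, so total CPU time is last minus first sample
    cpu_seconds = float(rows[-1]["CPU Sum (s)"]) - float(rows[0]["CPU Sum (s)"])
    # "Memory Sum (MB)" is an instantaneous value, so report the peak
    peak_memory_mb = max(float(r["Memory Sum (MB)"]) for r in rows)
    return {"cpu_seconds": round(cpu_seconds, 2),
            "peak_memory_mb": round(peak_memory_mb, 2)}
```

Running this against the logs from both FFmpeg builds reduces each run to one CPU-time figure and one peak-memory figure, which makes the x86_64 versus Arm64 comparison direct.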
diff --git a/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-3.md b/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-3.md index 8365c33c66..d4494f3247 100644 --- a/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-3.md +++ b/content/learning-paths/laptops-and-desktops/win-resource-ps1/how-to-3.md @@ -1,5 +1,5 @@ --- -title: Measuring power usage +title: Measure power usage weight: 4 ### FIXED, DO NOT MODIFY @@ -7,15 +7,17 @@ layout: learningpathall --- ## Sampling battery status -Querying battery status provides a way to measure power usage without an external power meter. It is also handy in that data collection and logging can be automatic. -A PowerShell script does all the work. It launches the video decoding task, samples battery status, and outputs sampled data to a file with format. +Querying battery status provides a way to measure power usage without an external power meter. Battery monitoring is also convenient because data collection and logging can be automated. + +A PowerShell script launches the video decoding task, samples battery status, and outputs the collected data to a CSV-formatted file. + +Open your code editor, copy the content below, and save it as `sample_power.ps1`: -Open your code editor, copy content below and save it as `sample_power.ps1`. ```PowerShell { line_numbers = true } param ( - [string]$exePath = "path\to\ffplay.exe", - [string[]]$argList = @("-loop", "150", "-autoexit", "D:\RaceNight_1080p.mp4"), + [string]$exePath = "ffmpeg-n7.1.1-56-gc2184b65d2-win64-gpl-7.1\ffmpeg-n7.1.1-56-gc2184b65d2-win64-gpl-7.1\bin\ffplay.exe", + [string[]]$argList = @("-loop", "150", "-autoexit", "RaceNight_1080p.mp4"), [int]$interval = 10, [string]$outputFile = "watts.csv" ) @@ -63,20 +65,22 @@ while (-not $process.HasExited) { } ``` -{{% notice Note %}} -Modify the path to `ffplay.exe` on line 2 accordingly.
-{{% /notice %}} +Before you run the script, modify the path to `ffplay.exe` on line 2 to match your installation location. + +The battery data is system-based and process-agnostic. Fully charge the battery, close any unnecessary applications, unplug the power cord, and run the script: -The battery data is system based and process agnostic. Full charge the battery. Close any unnecessary applications. Unplug the power cord. And run the script: ```console .\sample_power.ps1 ``` -A video starts playing. It ends in 30 minutes. And then you can find the sample results file **watts.csv** in current directory. The test runs for a longer time so you can observe a distinct battery remaining capacity drop. -The script collects battery remaining capacity and discharge rate periodically. You can track the battery remaining capacity to have an understanding of the power consumption. +A video starts playing and completes in 30 minutes. When finished, you can find the results file `watts.csv` in the current directory. The test runs for a longer duration so you can observe a distinct drop in battery remaining capacity. + +The script collects remaining battery capacity and discharge rate periodically. You can track the battery remaining capacity to understand the power consumption patterns. 
+ +### View results + +The output below shows the results from running the x86_64 version of `ffplay.exe`: -### View result -Shown below is example sample result from running x86_64 version ffplay.exe: ```output Timestamp,RemainingCapacity(mWh),DischargeRate(mW) 2025-08-15T14:42:50.5231628+08:00,48438,4347 @@ -84,7 +88,8 @@ Timestamp,RemainingCapacity(mWh),DischargeRate(mW) 2025-08-15T15:12:38.2028188+08:00,43823,8862 ``` -Example result from running Arm64 native ffplay.exe: +The output below shows the results from running the Arm64 version of `ffplay.exe`: + ```output Timestamp,RemainingCapacity(mWh),DischargeRate(mW) 2025-08-15T15:53:05.8430758+08:00,48438,3255 @@ -92,4 +97,6 @@ Timestamp,RemainingCapacity(mWh),DischargeRate(mW) 2025-08-15T16:22:55.3163530+08:00,44472,7319 ``` -The sample result file is in **csv** format. You can open it with spreadsheet applications like Microsoft Excel for a better view and plot lines for data analysis. +The sample results file is in CSV format. You can open it with spreadsheet applications like Microsoft Excel for better visualization and to plot data analysis charts. + +Battery monitoring provides an effective way to measure power consumption differences between x86_64 and native Arm64 applications. By comparing discharge rates, you can quantify the power efficiency advantages that Arm processors typically demonstrate for video decoding workloads. 
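The battery log lends itself to the same treatment. This hypothetical Python sketch (the `summarize_power` helper name is illustrative; the column names follow the sample output above) derives the energy consumed and the average discharge power from the first and last samples:

```python
import csv
import re
from datetime import datetime

def _parse_ts(ts):
    # PowerShell timestamps carry 7 fractional-second digits; trim to the 6
    # that datetime.fromisoformat accepts on older Python versions
    return datetime.fromisoformat(re.sub(r"(\.\d{6})\d+", r"\1", ts))

def summarize_power(path="watts.csv"):
    """Summarize a CSV produced by sample_power.ps1.

    Assumes the header shown in the sample output:
    Timestamp,RemainingCapacity(mWh),DischargeRate(mW)
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if len(rows) < 2:
        return None
    first, last = rows[0], rows[-1]
    energy_used_mwh = (int(first["RemainingCapacity(mWh)"])
                       - int(last["RemainingCapacity(mWh)"]))
    hours = (_parse_ts(last["Timestamp"])
             - _parse_ts(first["Timestamp"])).total_seconds() / 3600
    # Average power is energy used divided by elapsed time
    return {"energy_used_mwh": energy_used_mwh,
            "avg_power_mw": round(energy_used_mwh / hours)}
```

For the x86_64 sample above, the remaining capacity drops by 4615 mWh over roughly 30 minutes, an average draw of about 9.3 W; repeating the calculation for the Arm64 log quantifies the power difference between the two builds.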
diff --git a/content/learning-paths/laptops-and-desktops/win11-vm-automation/_index.md b/content/learning-paths/laptops-and-desktops/win11-vm-automation/_index.md index e6ca39d6f4..65a17a153f 100644 --- a/content/learning-paths/laptops-and-desktops/win11-vm-automation/_index.md +++ b/content/learning-paths/laptops-and-desktops/win11-vm-automation/_index.md @@ -1,22 +1,18 @@ --- -title: Windows on Arm virtual machine creation using Arm Linux, QEMU, and KVM - -draft: true -cascade: - draft: true +title: Automate Windows on Arm virtual machine deployment with QEMU and KVM on Arm Linux minutes_to_complete: 90 -who_is_this_for: This is for developers and system administrators who want to automate Windows on Arm virtual machine (VM) creation on Arm Linux systems using QEMU and KVM. +who_is_this_for: This is an introductory topic for developers and system administrators who want to automate Windows on Arm virtual machine (VM) creation on Arm Linux systems using QEMU and KVM. learning_objectives: - - Understand the process of creating Windows on Arm virtual machine using Bash scripts. - - Run scripts for VM creation and management. - - Troubleshoot common VM setup and runtime issues. - - Use Windows on Arm virtual machines for software development and testing. + - Understand the process of creating a Windows on Arm virtual machine using Bash scripts + - Run scripts for VM creation and management + - Troubleshoot common VM setup and runtime issues + - Use Windows on Arm virtual machines for software development and testing prerequisites: - - An Arm Linux system with KVM support and a minimum of 8GB RAM and 50GB free disk space. 
+ - An Arm Linux system with KVM support and a minimum of 8GB RAM and 50GB free disk space author: Jason Andrews diff --git a/content/learning-paths/laptops-and-desktops/win11-vm-automation/prerequisites-1.md b/content/learning-paths/laptops-and-desktops/win11-vm-automation/prerequisites-1.md index 47b42a4298..3261fc7983 100644 --- a/content/learning-paths/laptops-and-desktops/win11-vm-automation/prerequisites-1.md +++ b/content/learning-paths/laptops-and-desktops/win11-vm-automation/prerequisites-1.md @@ -1,26 +1,24 @@ --- -title: System requirements +title: Check system requirements weight: 2 ### FIXED, DO NOT MODIFY layout: learningpathall --- -If you are building and testing Windows on Arm software you have a variety of options to run Windows on Arm. You can use local laptops, cloud virtual machines, and CI/CD platforms like GitHub Actions for development tasks. -You can also use a local Arm Linux server to create virtual machines for Windows on Arm software development tasks. This Learning Path explains how to install and use Windows on Arm virtual machines on an Arm Linux system. Two scripts are provided to create and run Windows on Arm virtual machines to make the process easy. +## Prepare your system for Windows on Arm virtual machines -Before creating a Windows on Arm virtual machine, ensure your Arm Linux system meets the hardware and software requirements. This section covers everything you need to prepare to create a Windows on Arm virtual machine using QEMU and KVM. +To build and test Windows on Arm software, choose from several flexible options: run Windows on Arm locally, use cloud-based virtual machines, or leverage CI/CD platforms like GitHub Actions. For hands-on development, set up a Windows on Arm virtual machine directly on your Arm Linux server. -## Hardware requirements +In this Learning Path, you'll install and use Windows on Arm virtual machines on an Arm Linux system. 
Two easy-to-use scripts are included to streamline the creation and management of your virtual machines. Before you get started, make sure your Arm Linux system meets the hardware and software requirements. In this section, you'll set up everything you need to create a Windows on Arm virtual machine using QEMU and KVM. -You need an Arm Linux system with enough performance, memory, and storage to run a Windows on Arm virtual machine. +## Check hardware requirements -The provided scripts have been tested on a [Thelio Astra](https://system76.com/desktops/thelio-astra-a1.1-n1/configure?srsltid=AfmBOoplXbwXifyxppxFe_oyahYMJHUT0bp2BnIBSH5ADjqgZxB7wW75) running Ubuntu 24.04. +You need an Arm Linux system with enough performance, memory, and storage to run a Windows on Arm virtual machine. You can use the scripts on a [Thelio Astra](https://system76.com/desktops/thelio-astra-a1.1-n1/configure?srsltid=AfmBOoplXbwXifyxppxFe_oyahYMJHUT0bp2BnIBSH5ADjqgZxB7wW75) running Ubuntu 24.04, where they have been tested successfully. Thelio Astra is an Arm-based desktop computer designed by System76 for autonomous vehicle development and other general-purpose Arm software development. It uses the Ampere Altra processor, which is based on the Arm Neoverse N1 CPU, and ships with the Ubuntu operating system. - -Other Arm Linux systems and other Linux distributions are possible, but have not been tested. General hardware requirements are listed below. +You can try these scripts on other Arm Linux systems or distributions, but only the configuration above has been tested. Check the general hardware requirements below before you continue. The minimum hardware requirements for the Arm Linux system are: @@ -28,15 +26,17 @@ The minimum hardware requirements for the Arm Linux system are: - 8 GB RAM - 50 GB free disk space -The scripts automatically allocate resources as listed below, but the details can be customized for your system. 
+Customize CPU cores, memory, and disk size by editing the variables at the top of each script file (`create-win11-vm.sh` and `run-win11-vm.sh` in the project directory) to match your system's capabilities. + +By default, the scripts allocate the following resources: - CPU: half of available cores (minimum 4 cores) - Memory: half of available RAM (minimum 4 GB) - Disk: 40 GB VM disk -## KVM support +## Verify KVM support -Kernel-based Virtual Machine (KVM) support is required for hardware-accelerated virtualization and good VM performance. +Kernel-based Virtual Machine (KVM) support is required for hardware-accelerated virtualization and optimal virtual machine (VM) performance on Arm systems. KVM is a virtualization infrastructure built into the Linux kernel that allows you to run virtual machines with near-native performance. It leverages Arm's hardware virtualization extensions to provide efficient CPU virtualization, while QEMU handles device emulation and management. Without KVM, virtual machines run much slower using software emulation. @@ -59,7 +59,7 @@ This confirms that: - The KVM kernel module is loaded - The `/dev/kvm` device exists -## Required software +## Install required software The scripts require several software packages. @@ -70,11 +70,13 @@ sudo apt update sudo apt install qemu-system-arm qemu-utils genisoimage wget curl jq uuid-runtime -y ``` -If needed, the [Remmina](https://remmina.org/) remote desktop (RDP) client is automatically installed by the run script so you don't need to install it now, but you can install it using the command below.
+If needed, the [Remmina](https://remmina.org/) remote desktop (RDP) client is automatically installed by the run script so you don't need to install it now, but you can install it using this command: ```console sudo apt install remmina remmina-plugin-rdp -y ``` -Proceed to the next section to learn about the scripts. + +You’ve verified your system requirements and you’re now ready to move on and start working with Windows on Arm virtual machines. + diff --git a/content/learning-paths/laptops-and-desktops/win11-vm-automation/understanding-scripts-2.md b/content/learning-paths/laptops-and-desktops/win11-vm-automation/understanding-scripts-2.md index b6103b6733..12a8d4a8d5 100644 --- a/content/learning-paths/laptops-and-desktops/win11-vm-automation/understanding-scripts-2.md +++ b/content/learning-paths/laptops-and-desktops/win11-vm-automation/understanding-scripts-2.md @@ -1,11 +1,13 @@ --- -title: Understanding the virtual machine scripts +title: Understand and customize Windows on Arm VM automation scripts weight: 3 layout: "learningpathall" --- +## Get started with the Windows on Arm VM automation scripts + A GitHub project provides two Bash scripts. Understanding their architecture and design will help you use them effectively and enable you to customize the options for your specific needs. Start by cloning the project repository from GitHub to your Arm Linux system. @@ -47,15 +49,15 @@ Virtual machine creation includes the following steps: The `create-win11-vm.sh` script implements a four-step process that builds a Windows VM incrementally: -### Step 1: Create VM directory +### Step 1: Create the VM directory Step 1 initializes the VM directory structure and configuration. It creates the VM directory, copies initial configuration files, and sets up the basic environment. As a result, the VM directory, configuration files, and connection profiles are created. 
-### Step 2: Download Windows +### Step 2: Download Windows and VirtIO drivers Step 2 downloads the Windows 11 ISO and VirtIO drivers. It downloads the Windows 11 Arm ISO from Microsoft, fetches VirtIO drivers, and prepares unattended installation files. The files created during this step include `installer.iso`, `virtio-win.iso`, and the unattended installation directory. This step takes some time as the Windows ISO download is large, but if you already have the file the script will save time and not repeat the download. -### Step 3: Prepare VM disk image +### Step 3: Prepare the VM disk image Step 3 creates the VM disk image and finalizes the installation setup. It builds the unattended installation ISO, creates the main VM disk image, and configures all installation media. The files created during this step include `disk.qcow2` and `unattended.iso`. @@ -63,7 +65,7 @@ Step 3 creates the VM disk image and finalizes the installation setup. It builds The product key used in the scripts is a generic key provided by Microsoft, which allows installation. This key is for testing purposes only and does not activate Windows. If you plan to continue using Windows beyond installation, you should replace it with a genuine product key. {{% /notice %}} -### Step 4: First Windows boot +### Step 4: Boot Windows for the first time Step 4 executes the Windows installation. It boots the VM with installation media, runs the automated Windows setup, and completes the initial configuration. The result is a fully installed and configured Windows on Arm VM. @@ -93,7 +95,7 @@ For storage, the default VM disk size is 40GB in QCOW2 format. The available dis All settings are customizable using command line arguments. -## Script Integration and Workflow +## Script integration and workflow The create and run scripts share the same configuration files. Separating creation from execution enables you to create a VM once and then use the run script repeatedly. 
diff --git a/content/learning-paths/laptops-and-desktops/win11-vm-automation/vm-creation-3.md b/content/learning-paths/laptops-and-desktops/win11-vm-automation/vm-creation-3.md index e5572a56ed..40183abb46 100644 --- a/content/learning-paths/laptops-and-desktops/win11-vm-automation/vm-creation-3.md +++ b/content/learning-paths/laptops-and-desktops/win11-vm-automation/vm-creation-3.md @@ -78,11 +78,12 @@ Set up a VM with English International language: ./create-win11-vm.sh all $HOME/win11-vm --language "English International" ``` + ## Alternative four-step creation process -The VM creation process consists of four distinct steps that can be run individually. Understanding each step helps with troubleshooting and customization. +You can run each step of the VM creation process individually. Understanding each step helps with troubleshooting and customization. -### Step 1: Create VM directory structure +### Step 1: Create the VM directory structure ```console ./create-win11-vm.sh create $HOME/win11-vm @@ -151,7 +152,7 @@ The script uses an automated process to download Windows 11 from Microsoft's off 3. Obtain download link - Retrieves the direct download URL for Arm64 4. Download and verify - Downloads the ISO and verifies its integrity -### Step 3: Prepare VM disk +### Step 3: Prepare the VM disk ```console ./create-win11-vm.sh prepare $HOME/win11-vm @@ -175,7 +176,7 @@ The script creates a QCOW2 disk image with these optimizations: Important Note: If `disk.qcow2` already exists, the script will warn you that proceeding will delete the existing VM's hard drive and start over with a clean installation. 
-### Step 4: First boot and Windows installation +### Step 4: Boot and install Windows for the first time ```console ./create-win11-vm.sh firstboot $HOME/win11-vm diff --git a/content/learning-paths/laptops-and-desktops/win11-vm-automation/vm-execution-4.md b/content/learning-paths/laptops-and-desktops/win11-vm-automation/vm-execution-4.md index e05656d100..d0449dfd63 100644 --- a/content/learning-paths/laptops-and-desktops/win11-vm-automation/vm-execution-4.md +++ b/content/learning-paths/laptops-and-desktops/win11-vm-automation/vm-execution-4.md @@ -12,11 +12,17 @@ After your Windows 11 Arm VM is created, launching it is simple with the unified ./run-win11-vm.sh $HOME/win11-vm ``` -This single command handles the entire VM startup and connection process automatically. The script performs three key steps: checks if the VM is already running, starts it in headless mode if needed, and connects you via RDP using Remmina. +This single command handles the entire VM startup and connection process automatically. + +The script performs three key steps. It does the following: + +- Checks if the VM is already running +- Starts the VM in headless mode if required +- Connects you through RDP using Remmina When the virtual machine starts you will see it on your Linux desktop: -![Windows on Arm VM](./images/win11arm.png) +![Screenshot showing the Windows 11 desktop running in a virtual machine on an Arm-based Linux system. The Windows Start menu and taskbar are visible, confirming successful VM launch and RDP connection. alt-text#center](./images/win11arm.png "Windows 11 Arm VM desktop") ## What does the run script do? @@ -279,4 +285,4 @@ RDP session ended This is a known Remmina issue and does not affect VM functionality. -You have learned how to create Windows on Arm virtual machines on an Arm Linux system with QEMU and KVM. You can use these virtual machines for software development and testing. 
You can speedup your development tasks by using an Arm Linux desktop or server with high processor count and plenty of RAM. \ No newline at end of file +You have completed the VM execution section. You now know how to run, monitor, and manage Windows on Arm virtual machines on an Arm Linux system. Keep building your skills and explore more advanced automation or troubleshooting as your next step - great work! diff --git a/content/learning-paths/laptops-and-desktops/wsl2/backup.md b/content/learning-paths/laptops-and-desktops/wsl2/backup.md index 3dee086b20..591f54aea6 100644 --- a/content/learning-paths/laptops-and-desktops/wsl2/backup.md +++ b/content/learning-paths/laptops-and-desktops/wsl2/backup.md @@ -10,7 +10,7 @@ layout: learningpathall Use the export command to save the state of a WSL instance. ```bash -wsl --export Ubuntu-22.04 ubuntu-backup.tar +wsl --export Ubuntu-24.04 ubuntu-backup.tar ``` The tar file can be copied and imported to the same machine or to another machine. diff --git a/content/learning-paths/laptops-and-desktops/wsl2/import.md b/content/learning-paths/laptops-and-desktops/wsl2/import.md index 74c0106cb5..2792cf8f2d 100644 --- a/content/learning-paths/laptops-and-desktops/wsl2/import.md +++ b/content/learning-paths/laptops-and-desktops/wsl2/import.md @@ -43,37 +43,4 @@ wsl -d alpine1 Alpine Linux is now running in WSL. 
-### Install Raspberry Pi OS - -Check the [Raspberry Pi OS Downloads](http://downloads.raspberrypi.org/) - -Navigate to [raspios_lite_arm64](http://downloads.raspberrypi.org/raspios_lite_arm64/) - -Download the root filesystem named [root.tar.xz](http://downloads.raspberrypi.org/raspios_lite_arm64/root.tar.xz) - -At a Windows Command Prompt import the downloaded filesystem into WSL as a Linux distribution: - -```cmd -wsl --import rpios c:\Users\\rpios root.tar.xz -``` - -The name of the distribution will be `rpios` - -The storage will be in `C:\Users\\rpios` - -When the import is complete, list the installed distributions: - -```cmd -wsl --list -``` - -The new `rpios` distribution will be on the list. - -Start Raspberry Pi OS: - -```cmd -wsl -d rpios -``` - -Raspberry Pi OS on aarch64 is now running in WSL. diff --git a/content/learning-paths/laptops-and-desktops/wsl2/setup.md b/content/learning-paths/laptops-and-desktops/wsl2/setup.md index 7fab780174..ccb809e725 100644 --- a/content/learning-paths/laptops-and-desktops/wsl2/setup.md +++ b/content/learning-paths/laptops-and-desktops/wsl2/setup.md @@ -59,7 +59,7 @@ The operation completed successfully. ## Install a Linux distribution -Once WSL 2 is installed, the Microsoft store is the easiest place to find a Linux distribution. [Installing Ubuntu 22.04](https://apps.microsoft.com/store/detail/ubuntu-22041-lts/9PN20MSR04DW) is quick and easy from the store. +Once WSL 2 is installed, the Microsoft store is the easiest place to find a Linux distribution. [Installing Ubuntu 24.04](https://apps.microsoft.com/detail/9nz3klhxdjp5?hl=en-US&gl=US) is quick and easy from the store. There are other Linux distributions available in the Microsoft Store. Make sure to select the ones that work on Arm. Some do not work and it may be some trial-and-error to identify those that work on Arm. 
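If you are scripting WSL setup rather than installing interactively, one approach is to filter the `wsl --list --online` output for the distribution name you want. The sketch below runs against a captured sample of that output so it is self-contained; the sample rows are illustrative only.

```shell
#!/bin/sh
# Hypothetical helper: extract a distribution NAME column value from
# `wsl --list --online` style output. A captured sample stands in for the
# live command so the sketch runs anywhere.
sample='NAME                   FRIENDLY NAME
Debian                 Debian GNU/Linux
Ubuntu-22.04           Ubuntu 22.04 LTS
Ubuntu-24.04           Ubuntu 24.04 LTS'

# Pick the first column of the Ubuntu 24.04 row.
pick=$(printf '%s\n' "$sample" | awk '$1 == "Ubuntu-24.04" {print $1; exit}')
echo "$pick"
```

On a real system you would then pass the result to the installer, for example `wsl --install "$pick"`.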
@@ -74,24 +74,33 @@ wsl --list --online The output will list the available distributions: ```output +The following is a list of valid distributions that can be installed. +Install using 'wsl.exe --install <Distro>'. + NAME FRIENDLY NAME -Ubuntu Ubuntu +AlmaLinux-8 AlmaLinux OS 8 +AlmaLinux-9 AlmaLinux OS 9 +AlmaLinux-Kitten-10 AlmaLinux OS Kitten 10 +AlmaLinux-10 AlmaLinux OS 10 Debian Debian GNU/Linux +FedoraLinux-42 Fedora Linux 42 +Ubuntu Ubuntu +Ubuntu-24.04 Ubuntu 24.04 LTS kali-linux Kali Linux Rolling -Ubuntu-18.04 Ubuntu 18.04 LTS +openSUSE-Tumbleweed openSUSE Tumbleweed +openSUSE-Leap-16.0 openSUSE Leap 16.0 Ubuntu-20.04 Ubuntu 20.04 LTS Ubuntu-22.04 Ubuntu 22.04 LTS -Ubuntu-24.04 Ubuntu 24.04 LTS -openSUSE-Tumbleweed openSUSE Tumbleweed +openSUSE-Leap-15.6 openSUSE Leap 15.6 ``` Install a distribution from this list: ```cmd -wsl --install Ubuntu +wsl --install Ubuntu-24.04 ``` -Be patient, the progress may stay on 0 for a bit. +Be patient; the download and installation take some time. After installation, each Linux distribution will have an icon on the Windows application menu. Use this icon to start WSL with the Linux distribution.
@@ -154,7 +163,7 @@ wsl --list --running Terminate a running distribution: ```console -wsl --terminate Ubuntu-22.04 +wsl --terminate Ubuntu-24.04 ``` Shutdown all running distributions: @@ -166,7 +175,7 @@ wsl --shutdown Unregister the Linux distribution and delete the filesystem: ```console -wsl --unregister Ubuntu-22.04 +wsl --unregister Ubuntu-24.04 ``` Update WSL to the latest version: diff --git a/content/learning-paths/laptops-and-desktops/wsl2/wsl-linux.png b/content/learning-paths/laptops-and-desktops/wsl2/wsl-linux.png index 02ae67d8bd..98d6183671 100644 Binary files a/content/learning-paths/laptops-and-desktops/wsl2/wsl-linux.png and b/content/learning-paths/laptops-and-desktops/wsl2/wsl-linux.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/_index.md b/content/learning-paths/mobile-graphics-and-gaming/_index.md index 0ba3f637ac..69c2e17be4 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/_index.md +++ b/content/learning-paths/mobile-graphics-and-gaming/_index.md @@ -10,13 +10,13 @@ key_ip: maintopic: true operatingsystems_filter: - Android: 32 -- Linux: 30 +- Linux: 31 - macOS: 14 - Windows: 14 subjects_filter: - Gaming: 6 - Graphics: 6 -- ML: 12 +- ML: 13 - Performance and Architecture: 35 subtitle: Optimize Android apps and build faster games using cutting-edge Arm tech title: Mobile, Graphics, and Gaming @@ -34,13 +34,13 @@ tools_software_languages_filter: - Bazel: 1 - C: 4 - C#: 3 -- C++: 11 +- C++: 12 - CCA: 1 - Clang: 12 - CMake: 1 - Coding: 1 - Docker: 1 -- ExecuTorch: 1 +- ExecuTorch: 2 - Frame Advisor: 1 - GCC: 12 - Generative AI: 2 @@ -49,6 +49,7 @@ tools_software_languages_filter: - Google Test: 1 - Hugging Face: 5 - Java: 6 +- Jupyter Notebook: 1 - KleidiAI: 1 - Kotlin: 7 - LiteRT: 1 @@ -61,7 +62,7 @@ tools_software_languages_filter: - ONNX Runtime: 1 - OpenGL ES: 1 - Python: 4 -- PyTorch: 1 +- PyTorch: 2 - QEMU: 1 - RenderDoc: 1 - RME: 1 @@ -74,7 +75,7 @@ tools_software_languages_filter: - Unreal 
Engine: 4 - Visual Studio: 1 - Visual Studio Code: 1 -- Vulkan: 4 +- Vulkan: 5 - Vulkan SDK: 1 - XNNPACK: 1 weight: 3 diff --git a/content/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/_index.md b/content/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/_index.md index dc2d80493f..a0b7a89fcc 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/_index.md +++ b/content/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/_index.md @@ -35,6 +35,7 @@ tools_software_languages: - C++ - Python - Hugging Face + - ExecuTorch operatingsystems: - macOS diff --git a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/1-introduction.md b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/1-introduction.md index 555322717a..24838dc4b0 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/1-introduction.md +++ b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/1-introduction.md @@ -1,34 +1,44 @@ --- -title: Install Model Gym and Explore Neural Graphics Examples +title: Install Model Gym and explore neural graphics examples weight: 2 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## What is Neural Graphics? +## What is neural graphics? -Neural graphics is an intersection of graphics and machine learning. Rather than relying purely on traditional GPU pipelines, neural graphics integrates learned models directly into the rendering stack. The techniques are particularly powerful on mobile devices, where battery life and performance constraints limit traditional compute-heavy rendering approaches. The goal is to deliver high visual fidelity without increasing GPU cost. This is achieved by training and deploying compact neural networks optimized for the device's hardware. 
+Neural graphics is an intersection of graphics and machine learning. Rather than relying purely on traditional GPU pipelines, neural graphics integrates learned models directly into the rendering stack. These techniques are particularly powerful on mobile devices, where battery life and performance constraints limit traditional compute-heavy rendering approaches. Your goal is to deliver high visual fidelity without increasing GPU cost. You achieve this by training and deploying compact neural networks optimized for your device's hardware. ## How does Arm support neural graphics? -Arm enables neural graphics through the [**Neural Graphics Development Kit**](https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics): a set of open-source tools that let developers train, evaluate, and deploy ML models for graphics workloads. + +Arm enables neural graphics through the [**Neural Graphics Development Kit**](https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics): a set of open-source tools that let you train, evaluate, and deploy ML models for graphics workloads. + At its core are the ML Extensions for Vulkan, which bring native ML inference into the GPU pipeline using structured compute graphs. These extensions (`VK_ARM_tensors` and `VK_ARM_data_graph`) allow real-time upscaling and similar effects to run efficiently alongside rendering tasks. -The neural graphics models can be developed using well-known ML frameworks like PyTorch, and exported to deployment using Arm's hardware-aware pipeline. The workflow converts the model to `.vgf` via the TOSA intermediate representation, making it possible to do tailored model development for you game use-case. This Learning Path focuses on **Neural Super Sampling (NSS)** as the use case for training, evaluating, and deploying neural models using a toolkit called the [**Neural Graphics Model Gym**](https://github.com/arm/neural-graphics-model-gym). 
To learn more about NSS, you can check out the [resources on Hugging Face](https://huggingface.co/Arm/neural-super-sampling). Additonally, Arm has developed a set of Vulkan Samples to get started. Specifically, `.vgf` format is introduced in the `postprocessing_with_vgf` one. The Vulkan Samples and over-all developer resources for neural graphics is covered in the [introductory Learning Path](/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample). -Starting in 2026, Arm GPUs will feature dedicated neural accelerators, optimized for low-latency inference in graphics workloads. To help developers get started early, Arm provides the ML Emulation Layers for Vulkan that simulate future hardware behavior, so you can build and test models now. + +You can develop neural graphics models using well-known ML frameworks like PyTorch, then export them for deployment with Arm's hardware-aware pipeline. The workflow converts your model to `.vgf` using the TOSA intermediate representation, making it possible to tailor model development for your game use case. In this Learning Path, you will focus on **Neural Super Sampling (NSS)** as the primary example for training, evaluating, and deploying neural models using the [**Neural Graphics Model Gym**](https://github.com/arm/neural-graphics-model-gym). To learn more about NSS, see the [resources on Hugging Face](https://huggingface.co/Arm/neural-super-sampling). Arm has also developed a set of Vulkan Samples to help you get started. The `.vgf` format is introduced in the `postprocessing_with_vgf` sample. For a broader overview of neural graphics developer resources, including the Vulkan Samples, see the introductory Learning Path [Get started with neural graphics using ML Extensions for Vulkan](/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/). + + + +Starting in 2026, Arm GPUs will feature dedicated neural accelerators, optimized for low-latency inference in graphics workloads. 
To help you get started early, Arm provides the ML Emulation Layers for Vulkan that simulate future hardware behavior, so you can build and test models now. ## What is the Neural Graphics Model Gym? + The Neural Graphics Model Gym is an open-source toolkit for fine-tuning and exporting neural graphics models. It is designed to streamline the entire model lifecycle for graphics-focused use cases, like NSS. -Model Gym gives you: +With Model Gym, you can: + +- Train and evaluate models using a PyTorch-based API +- Export models to `.vgf` using ExecuTorch for real-time use in game development +- Take advantage of quantization-aware training (QAT) and post-training quantization (PTQ) with ExecuTorch +- Use an optional Docker setup for reproducibility + +You can choose to work with Python notebooks for rapid experimentation or use the command-line interface for automation. This Learning Path will walk you through the demonstrative notebooks and prepare you to start using the CLI for your own model development. -- A training and evaluation API built on PyTorch -- Model export to .vgf using ExecuTorch for real-time use in game development -- Support for quantization-aware training (QAT) and post-training quantization (PTQ) using ExecuTorch -- Optional Docker setup for reproducibility -The toolkit supports workflows via both Python notebooks (for rapid experimentation) and command-line interface. This Learning Path will walk you through the demonstrative notebooks, and prepare you to start using the CLI for your own model development. +You're now ready to set up your environment and start working with neural graphics models. Keep going! 
diff --git a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/2-devenv.md b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/2-devenv.md index 013148bcaa..3e679dbc31 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/2-devenv.md +++ b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/2-devenv.md @@ -6,7 +6,11 @@ weight: 3 layout: learningpathall --- -In this section, you will install a few dependencies into your Ubuntu environment. You'll need a working Python 3.10+ environment with some ML and system dependencies. Make sure Python is installed by verifying that the version is >3.10: +## Overview + +In this section, you will install a few dependencies into your Ubuntu environment. You'll need a working Python 3.10+ environment with some ML and system dependencies. + +Start by making sure Python is installed by verifying that the version is >3.10: ```bash python3 --version @@ -34,10 +38,10 @@ From inside the `neural-graphics-model-gym-examples/` folder, run the setup scri ./setup.sh ``` -This will: -- create a Python virtual environment called `nb-env` -- install the `ng-model-gym` package and required dependencies -- download the datasets and weights needed to run the notebooks +This will do the following: +- Create a Python virtual environment called `nb-env` +- Install the `ng-model-gym` package and required dependencies +- Download the datasets and weights needed to run the notebooks Activate the virtual environment: @@ -55,4 +59,5 @@ print("Torch version:", torch.__version__) print("Model Gym version:", ng_model_gym.__version__) ``` -You’re now ready to start walking through the training and evaluation steps. +You’ve completed your environment setup - great work! You’re now ready to start walking through the training and evaluation steps. 
+ diff --git a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/3-model-training.md b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/3-model-training.md index 37deaf987c..7714c0202b 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/3-model-training.md +++ b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/3-model-training.md @@ -5,19 +5,18 @@ weight: 4 ### FIXED, DO NOT MODIFY layout: learningpathall --- +## About NSS In this section, you'll get hands-on with how you can use the model gym to fine-tune the NSS use-case. -## About NSS - Arm Neural Super Sampling (NSS) is an upscaling technique designed to solve a growing challenge in real-time graphics: delivering high visual quality without compromising performance or battery life. Instead of rendering every pixel at full resolution, NSS uses a neural network to intelligently upscale frames, freeing up GPU resources and enabling smoother, more immersive experiences on mobile devices. -The NSS model is available in two formats: +The NSS model is available in two formats, as shown in the table below: | Model format | File extension | Used for | |--------------|----------------|--------------------------------------------------------------------------| -| PyTorch | .pt | training, fine-tuning, or evaluation in or scripts using the Model Gym | -| VGF | .vgf | for deployment using ML Extensions for Vulkan on Arm-based hardware or emulation layers | +| PyTorch | `.pt` | training, fine-tuning, or evaluation in notebooks or scripts using the Model Gym | +| VGF | `.vgf` | for deployment using ML Extensions for Vulkan on Arm-based hardware or emulation layers | Both formats are available in the [NSS repository on Hugging Face](https://huggingface.co/Arm/neural-super-sampling). You'll also be able to explore config files, model metadata, usage details and detailed documentation on the use-case.
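The two file formats serve different stages of the workflow. As a trivial illustration (the function and file names here are hypothetical), a build script could route a model file by its extension:

```shell
#!/bin/sh
# Hypothetical dispatcher mirroring the format table: .pt files go to Model
# Gym training/evaluation, .vgf files go to Vulkan deployment.
route_model() {
    case "$1" in
        *.pt)  echo "train-or-evaluate" ;;   # PyTorch checkpoint -> Model Gym
        *.vgf) echo "deploy-vulkan" ;;       # VGF graph -> ML Extensions for Vulkan
        *)     echo "unknown" ;;
    esac
}

route_model model.pt
route_model model.vgf
```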
@@ -62,6 +61,8 @@ neural-graphics-model-gym-examples/tutorials/nss/model_evaluation_example.ipynb At the end you should see a visual comparison of the NSS upscaling and the ground truth image. -Proceed to the final section to view the model structure and explore further resources. + +You’ve completed the training and evaluation steps. Proceed to the final section to view the model structure and explore further resources. + diff --git a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/4-model-explorer.md b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/4-model-explorer.md index 7d3e720066..998c8e71a6 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/4-model-explorer.md +++ b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/4-model-explorer.md @@ -12,19 +12,21 @@ Model Explorer is a visualization tool for inspecting neural network structures This lets you inspect model architecture, tensor shapes, and graph connectivity before deployment. This can be a powerful way to debug and understand your exported neural graphics models. -## Setting up the VGF adapter +## Set up the VGF adapter The VGF adapter extends Model Explorer to support `.vgf` files exported from the Model Gym toolchain. -### Install the VGF adapter with pip +## Install the VGF adapter with pip + +Run: ```bash pip install vgf-adapter-model-explorer ``` -The source code is available on [GitHub](https://github.com/arm/vgf-adapter-model-explorer). +The VGF adapter model explorer source code is available on [GitHub](https://github.com/arm/vgf-adapter-model-explorer). -### Install Model Explorer +## Install Model Explorer The next step is to make sure the Model Explorer itself is installed. Use pip to set it up: @@ -32,7 +34,7 @@ The next step is to make sure the Model Explorer itself is installed. 
Use pip to pip install torch ai-edge-model-explorer ``` -### Launch the viewer +## Launch the viewer Once installed, launch the explorer with the VGF adapter: @@ -44,6 +46,4 @@ Use the file browser to open the `.vgf` model exported earlier in your training ## Wrapping up -Through this Learning Path, you’ve learned what neural graphics is and why it matters for game performance. You’ve stepped through the process of training and evaluating an NSS model using PyTorch and the Model Gym, and seen how to export that model into VGF (.vgf) for real-time deployment. You’ve also explored how to visualize and inspect the model’s structure using Model Explorer. - -As a next step, you can head over to the [Model Training Gym repository](https://github.com/arm/neural-graphics-model-gym/tree/main) documentation to explore integration into your own game development workflow. You’ll find resources on fine-tuning, deeper details about the training and export process, and everything you need to adapt to your own content and workflows. \ No newline at end of file +Through this Learning Path, you’ve learned what neural graphics is and why it matters for game performance. You’ve stepped through the process of training and evaluating an NSS model using PyTorch and the Model Gym, and seen how to export that model into VGF (.vgf) for real-time deployment. You’ve also explored how to visualize and inspect the model’s structure using Model Explorer. You can now explore the Model Training Gym repository for deeper integration and to keep building your skills. 
diff --git a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/_index.md b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/_index.md index 0c809ef4ef..e106b163bc 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/_index.md +++ b/content/learning-paths/mobile-graphics-and-gaming/model-training-gym/_index.md @@ -1,10 +1,6 @@ --- -title: Fine-Tuning Neural Graphics Models with Model Gym - -draft: true -cascade: - draft: true - +title: Fine-tuning neural graphics models with Model Gym + minutes_to_complete: 45 who_is_this_for: This is an advanced topic for developers exploring neural graphics and interested in training and deploying upscaling models like Neural Super Sampling (NSS) using PyTorch and Arm’s hardware-aware backend. @@ -50,10 +46,15 @@ further_reading: title: NSS on HuggingFace link: https://huggingface.co/Arm/neural-super-sampling type: website + - resource: + title: Vulkan ML Sample Learning Path + link: /learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/ + type: learningpath ### FIXED, DO NOT MODIFY -weight: 1 -layout: "learningpathall" -learning_path_main_page: "yes" +# ================================================================================ +weight: 1 # _index.md always has weight of 1 to order correctly +layout: "learningpathall" # All files under learning paths have this same wrapper +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. 
--- diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/1-prerequisites.md b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/1-prerequisites.md index c26f0e762b..865a90aa20 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/1-prerequisites.md +++ b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/1-prerequisites.md @@ -14,6 +14,7 @@ Begin by installing the latest version of [Android Studio](https://developer.and Next, install the following command-line tools: - `cmake`; a cross-platform build system. +- `python3`; an interpreted programming language, used by the project to fetch dependencies and models. - `git`; a version control system that you use to clone the Voice Assistant codebase. - `adb`; Android Debug Bridge, used to communicate with and control Android devices. @@ -22,9 +23,20 @@ Install these tools with the appropriate command for your OS: {{< tabpane code=true >}} {{< tab header="Linux/Ubuntu" language="bash">}} sudo apt update -sudo apt install git adb cmake -y +sudo apt install git adb cmake python3 -y {{< /tab >}} {{< tab header="macOS" language="bash">}} -brew install git android-platform-tools cmake +brew install git android-platform-tools cmake python + {{< /tab >}} +{{< /tabpane >}} + +Ensure the correct version of Python is installed; the project needs Python 3.9 or later: + +{{< tabpane code=true >}} + {{< tab header="Linux/Ubuntu" language="bash">}} +python3 --version + {{< /tab >}} + {{< tab header="macOS" language="bash">}} +python3 --version {{< /tab >}} {{< /tabpane >}} diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/2-overview.md b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/2-overview.md index 22348d6cfc..c5b4056d13 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/2-overview.md +++ b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/2-overview.md @@
-33,6 +33,26 @@ This process includes the following stages: - A neural network analyzes these features to predict the most likely transcription based on grammar and context. - The recognized text is passed to the next stage of the pipeline. +The voice assistant pipeline imports and builds a separate module to provide this STT functionality. You can access this at: + +``` +https://gitlab.arm.com/kleidi/kleidi-examples/speech-to-text +``` + +You can build the pipeline for various platforms and independently benchmark the STT functionality: + +|Platform|Details| +|---|---| +|Linux|x86_64: KleidiAI is disabled by default; aarch64: KleidiAI is enabled by default.| +|Android|Cross-compile for an Android device; ensure the Android NDK path is set and the correct toolchain file is provided. KleidiAI is enabled by default.| +|macOS|Native or cross-compilation for a Mac device. KleidiAI and SME kernels can be used if available on device.| + +Currently, this module uses [whisper.cpp](https://github.com/ggml-org/whisper.cpp) and wraps the backend library with a thin C++ layer. The module also provides JNI bindings for developers targeting Android-based applications. + +{{% notice %}} +You can get more information on how to build and use this module in the [module README](https://gitlab.arm.com/kleidi/kleidi-examples/speech-to-text/-/blob/main/README.md?ref_type=heads). +{{% /notice %}} + ## Large Language Model Large Language Models (LLMs) enable natural language understanding and, in this application, are used for question-answering. @@ -41,8 +61,37 @@ The text transcription from the previous part of the pipeline is used as input t By default, the LLM runs asynchronously, streaming tokens as they are generated. The UI updates in real time with each token, which is also passed to the final pipeline stage. +The voice assistant pipeline imports and builds a separate module to provide this LLM functionality.
You can access this at: + +``` +https://gitlab.arm.com/kleidi/kleidi-examples/large-language-models +``` + +You can build this pipeline for various platforms and independently benchmark the LLM functionality: + +|Platform|Details| +|---|---| +|Linux|x86_64: KleidiAI is disabled by default; aarch64: KleidiAI is enabled by default.| +|Android|Cross-compile for an Android device; ensure the Android NDK path is set and the correct toolchain file is provided. KleidiAI is enabled by default.| +|macOS|Native or cross-compilation for a Mac device. KleidiAI and SME kernels can be used if available on device.| + +Currently, this module provides a thin C++ layer as well as JNI bindings for developers targeting Android-based applications. The supported backends are: + +|Framework|Dependency|Input modalities supported|Output modalities supported|Neural Network| +|---|---|---|---|---| +|llama.cpp|https://github.com/ggml-org/llama.cpp|`image`, `text`|`text`|phi-2, Qwen2-VL-2B-Instruct| +|onnxruntime-genai|https://github.com/microsoft/onnxruntime-genai|`text`|`text`|phi-4-mini-instruct-onnx| +|mediapipe|https://github.com/google-ai-edge/mediapipe|`text`|`text`|gemma-2b-it-cpu-int4| + +{{% notice %}} +You can get more information on how to build and use this module in the [module README](https://gitlab.arm.com/kleidi/kleidi-examples/large-language-models/-/blob/main/README.md?ref_type=heads). +{{% /notice %}} + ## Text-to-Speech This part of the application pipeline uses the Android Text-to-Speech API along with additional logic to produce smooth, natural speech. In synchronous mode, speech playback begins only after the full LLM response is received. By default, the application operates in asynchronous mode, where speech synthesis starts as soon as a full or partial sentence is ready. Remaining tokens are buffered and processed by the Android Text-to-Speech engine to ensure uninterrupted playback.
+ +You are now familiar with the building blocks of this application and can build them independently for various platforms. In the next step, you build the multimodal Voice Assistant example, which runs on Android. diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/4-run.md b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/4-run.md index 21ef93cb95..6deafb4cee 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/4-run.md +++ b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/4-run.md @@ -16,33 +16,51 @@ By default, Android devices ship with developer mode disabled. To enable it, fol Once developer mode is enabled, connect your phone to your computer with USB. It should appear as a running device in the top toolbar. Select the device and click **Run** (a small green triangle, as shown below). This transfers the app to your phone and launches it. +In the graphic below, a Google Pixel 8 Pro phone is connected to the USB cable: -In the graphic below, a Samsung Galaxy Z Flip 6 phone is connected to the USB cable: ![upload image alt-text#center](upload.png "Upload the Voice App") -======= + ## Launch the Voice Assistant The app starts with this welcome screen: -![welcome image alt-text#center](voice_assistant_view1.jpg "Welcome Screen") +![welcome image alt-text#center](voice_assistant_view1.png "Welcome Screen") Tap **Press to talk** at the bottom of the screen to begin speaking your request. ## Voice Assistant controls -### View performance counters +You can use application controls to enable extra functionality or gather performance data. -You can toggle performance counters such as: -- Speech recognition time. -- LLM encode tokens per second. -- LLM decode tokens per second. -- Speech generation time.
+|Button|Control name|Description| +|---|---|---| +|1|Performance counters|Performance counters are hidden by default; click this to show the speech recognition time and the LLM encode and decode rates.| +|2|Speech generation|Speech generation is disabled by default; click this to use Android Text-to-Speech and get audible answers.| +|3|Reset conversation|By default, the application keeps context so you can ask follow-up questions; click this to reset the Voice Assistant's conversation history.| Click the icon circled in red in the top left corner to show or hide these metrics: -![performance image alt-text#center](voice_assistant_view2.jpg "Performance Counters") +![performance image alt-text#center](voice_assistant_view2.png "Performance Counters") + +### Multimodal Question Answering + +If you have built the application using the default `llama.cpp` backend, you can also use it in multimodal `(image + text)` question answering mode. + +For this, click the image button first: + +![use image alt-text#center](voice_assistant_multimodal_1.png "Add image button") + +This brings up the photos you can choose from: + +![choose image alt-text#center](choose_image.png "Choose image from the gallery") + +Choose an image to add it to the Voice Assistant's question: + +![add image alt-text#center](add_image.png "Add image to the question") + +You can now ask questions related to this image; the large language model will use both the image and the text for multimodal question answering. -To reset the Voice Assistant's conversation history, click the icon circled in red in the top right: +![ask question image alt-text#center](voice_assistant_multimodal_2.png "Add image to the question") -![reset image alt-text#center](voice_assistant_view3.jpg "Reset the Voice Assistant's Context") +Now that you have explored how the Android application is set up and built, you can see in detail how the KleidiAI library is used in the next step.
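The encode and decode figures shown by the performance counters are plain token-throughput numbers. As a simple illustration (not the application's actual code), the rate is the token count divided by the elapsed time:

```python
def tokens_per_second(n_tokens, start_s, end_s):
    """Token throughput, as reported by the LLM encode/decode counters."""
    elapsed = end_s - start_s
    if elapsed <= 0:
        raise ValueError("end_s must be later than start_s")
    return n_tokens / elapsed

# 96 tokens generated over 4 seconds is a decode rate of 24.0 tokens per second
print(tokens_per_second(96, 0.0, 4.0))
```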
diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/5-kleidiai.md b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/5-kleidiai.md index 31fd09ea46..fc3a5363b1 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/5-kleidiai.md +++ b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/5-kleidiai.md @@ -31,4 +31,5 @@ To disable KleidiAI during build: KleidiAI simplifies development by abstracting away low-level optimization: developers can write high-level code while the KleidiAI library selects the most efficient implementation at runtime based on the target hardware. This is possible thanks to its deeply optimized micro-kernels tailored for Arm architectures. -As newer versions of the architecture become available, KleidiAI becomes even more powerful: simply updating the library allows applications like the Voice Assistant to take advantage of the latest architectural improvements - such as SME2 — without requiring any code changes. This means better performance on newer devices with no additional effort from developers. \ No newline at end of file +As newer versions of the architecture become available, KleidiAI becomes even more powerful: simply updating the library allows applications like the multimodal Voice Assistant to take advantage of the latest architectural improvements such as SME2, without requiring any code changes. This means better performance on newer devices with no additional effort from developers. 
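The runtime kernel selection described above can be pictured as a small dispatch table keyed by CPU features. The sketch below is purely illustrative: the feature names and kernel functions are invented stand-ins for this explanation, not KleidiAI symbols.

```python
def matmul_generic(a, b):
    # Portable fallback used when no specialized kernel applies.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def matmul_i8mm(a, b):
    # Would use the 8-bit integer matrix multiplication (i8mm) instructions.
    return matmul_generic(a, b)

def matmul_sme2(a, b):
    # Would use SME2 micro-kernels on CPUs that support them.
    return matmul_generic(a, b)

def select_matmul(features):
    """Pick the fastest micro-kernel the running CPU supports."""
    if "sme2" in features:
        return matmul_sme2
    if "i8mm" in features:
        return matmul_i8mm
    return matmul_generic

# A phone reporting i8mm, like the devices used in this Learning Path:
kernel = select_matmul({"neon", "i8mm"})
print(kernel.__name__)
```

Updating the library is then enough to route the same call to a newer kernel: once an SME2-capable CPU is detected, the dispatch returns the SME2 variant with no application changes.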
+ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/_index.md b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/_index.md index 1fb425143e..9443da7334 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/_index.md +++ b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/_index.md @@ -1,19 +1,23 @@ --- -title: Accelerate Voice Assistant performance with KleidiAI and SME2 +title: Accelerate multimodal Voice Assistant performance with KleidiAI and SME2 minutes_to_complete: 30 -who_is_this_for: This is an introductory topic for developers who want to accelerate Voice Assistant performance on Android devices using KleidiAI and SME2. +who_is_this_for: This is an introductory topic for developers who want to implement a multimodal pipeline for a Voice Assistant application and accelerate the performance on Android devices using KleidiAI and SME2. learning_objectives: - - Compile and run a Voice Assistant Android application. - - Optimize performance using KleidiAI and SME2. + - Learn about the multimodal Voice Assistant pipeline and different components used. + - Learn about the functionality of ML components used and how these can be built and benchmarked on various platforms. + - Compile and run a multimodal Voice Assistant example based on Android OS. + - Optimize performance of multimodal Voice Assistant using KleidiAI and SME2. prerequisites: - - An Android phone that supports the i8mm Arm architecture feature (8-bit integer matrix multiplication). This Learning Path was tested on a Samsung Galaxy Z Flip 6. + - An Android phone that supports the i8mm Arm architecture feature (8-bit integer matrix multiplication). This Learning Path was tested on a Google Pixel 8 Pro. - A development machine with [Android Studio](https://developer.android.com/studio) installed. 
-author: Arnaud de Grandmaison +author: + - Arnaud de Grandmaison + - Nina Drozd skilllevels: Introductory subjects: Performance and Architecture @@ -22,10 +26,11 @@ armips: tools_software_languages: - Java - Kotlin + - C++ operatingsystems: + - Android - Linux - macOS - - Android further_reading: diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/add_image.png b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/add_image.png new file mode 100644 index 0000000000..b9db5a2421 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/add_image.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/choose_image.png b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/choose_image.png new file mode 100644 index 0000000000..26dd58ff93 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/choose_image.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/upload.png b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/upload.png index 30d7a4e478..9768c1577b 100644 Binary files a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/upload.png and b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/upload.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/upload_old.png b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/upload_old.png new file mode 100644 index 0000000000..30d7a4e478 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/upload_old.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_multimodal_1.png b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_multimodal_1.png new file mode 100644 index 0000000000..f00927c744 Binary files 
/dev/null and b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_multimodal_1.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_multimodal_2.png b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_multimodal_2.png new file mode 100644 index 0000000000..6d2bb5f367 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_multimodal_2.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_use_multimodal_1.png b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_use_multimodal_1.png new file mode 100644 index 0000000000..dc75319530 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_use_multimodal_1.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_use_multimodal_2.png b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_use_multimodal_2.png new file mode 100644 index 0000000000..d7fee1b46a Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_use_multimodal_2.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view1.png b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view1.png new file mode 100644 index 0000000000..59fbceb399 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view1.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view1.jpg b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view1_old.jpg similarity index 100% rename from 
content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view1.jpg rename to content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view1_old.jpg diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view2.jpg b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view2.jpg deleted file mode 100644 index cd46a52085..0000000000 Binary files a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view2.jpg and /dev/null differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view2.png b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view2.png new file mode 100644 index 0000000000..50a479bc68 Binary files /dev/null and b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view2.png differ diff --git a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view3.jpg b/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view3.jpg deleted file mode 100644 index 427cfe0ca8..0000000000 Binary files a/content/learning-paths/mobile-graphics-and-gaming/voice-assistant/voice_assistant_view3.jpg and /dev/null differ diff --git a/content/learning-paths/servers-and-cloud-computing/_index.md b/content/learning-paths/servers-and-cloud-computing/_index.md index d40cbc760a..372f4f691c 100644 --- a/content/learning-paths/servers-and-cloud-computing/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/_index.md @@ -8,7 +8,7 @@ key_ip: maintopic: true operatingsystems_filter: - Android: 3 -- Linux: 180 +- Linux: 183 - macOS: 13 - Windows: 14 pinned_modules: @@ -18,14 +18,14 @@ pinned_modules: - providers - migration subjects_filter: -- CI-CD: 7 +- CI-CD: 8 - Containers and Virtualization: 32 - Databases: 18 - Libraries: 9 - ML: 32 - Performance and 
Architecture: 72 - Storage: 2 -- Web: 12 +- Web: 14 subtitle: Optimize cloud native apps on Arm for performance and cost title: Servers and Cloud Computing tools_software_languages_filter: @@ -36,6 +36,7 @@ tools_software_languages_filter: - AI: 1 - Android Studio: 1 - Ansible: 2 +- apache: 1 - Apache Spark: 2 - Apache Tomcat: 2 - ApacheBench: 1 @@ -50,6 +51,7 @@ tools_software_languages_filter: - ASP.NET Core: 2 - Assembly: 5 - async-profiler: 1 +- Autocannon: 1 - AWS: 2 - AWS CDK: 2 - AWS Cloud Formation: 1 @@ -66,6 +68,7 @@ tools_software_languages_filter: - Bastion: 3 - BOLT: 2 - bpftool: 1 +- Buildkite: 1 - C: 10 - C#: 2 - C++: 12 @@ -80,7 +83,8 @@ tools_software_languages_filter: - Daytona: 1 - Demo: 3 - Django: 1 -- Docker: 23 +- Docker: 24 +- Docker Buildx: 1 - Envoy: 3 - ExecuTorch: 1 - FAISS: 1 @@ -121,7 +125,6 @@ tools_software_languages_filter: - KEDA: 1 - Kedify: 1 - Keras: 1 -- KleidiAI: 1 - Kubernetes: 11 - Libamath: 1 - libbpf: 1 @@ -145,7 +148,8 @@ tools_software_languages_filter: - Networking: 1 - Nexmark: 1 - NGINX: 4 -- Node.js: 3 +- Node.js: 4 +- npm: 1 - Ollama: 1 - ONNX Runtime: 2 - OpenBLAS: 1 @@ -156,6 +160,8 @@ tools_software_languages_filter: - PAPI: 1 - perf: 6 - Perf: 1 +- PHP: 1 +- PHPBench: 1 - PostgreSQL: 4 - Profiling: 1 - Python: 32 @@ -205,7 +211,7 @@ tools_software_languages_filter: weight: 1 cloud_service_providers_filter: - AWS: 17 -- Google Cloud: 18 +- Google Cloud: 21 - Microsoft Azure: 18 - Oracle: 2 --- diff --git a/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/_index.md b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/_index.md new file mode 100644 index 0000000000..dad926e671 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/_index.md @@ -0,0 +1,65 @@ +--- +title: Build multi-architecture Docker images with Buildkite on Google Axion + +draft: true +cascade: + draft: true + +minutes_to_complete: 40 + +who_is_this_for: This is an introductory guide for 
software developers learning to build and run multi-architecture Docker images with Buildkite on Arm-based Google Cloud C4A virtual machines powered by Google Axion processors. + +learning_objectives: +- Provision an Arm-based virtual machine on Google Cloud running SUSE Linux Enterprise Server or Ubuntu +- Install and configure Docker, Docker Buildx, and the Buildkite agent +- Write a Dockerfile to containerize a simple Flask-based Python application +- Configure a Buildkite pipeline to build a multi-architecture Docker image and push it to Docker Hub +- Run the application to ensure it works as expected + +prerequisites: + - A [Google Cloud Platform (GCP)](https://cloud.google.com/free?utm_source=google&hl=en) account with billing enabled + - Basic knowledge of Linux system administration such as creating users, installing packages, and managing services + - Familiarity with Docker and container concepts + - A GitHub account to host your application repository + +author: Jason Andrews + +##### Tags +skilllevels: Introductory +subjects: CI-CD +cloud_service_providers: Google Cloud + +armips: + - Neoverse + +tools_software_languages: + - Buildkite + - Docker + - Docker Buildx + +operatingsystems: + - Linux + +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ +further_reading: + - resource: + title: Google Cloud documentation + link: https://cloud.google.com/docs + type: documentation + + - resource: + title: Buildkite documentation + link: https://buildkite.com/docs + type: documentation + + - resource: + title: Docker documentation + link: https://docs.docker.com/ + type: documentation + +weight: 1 +layout: "learningpathall" +learning_path_main_page: "yes" +--- diff --git a/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/_next-steps.md 
b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/_next-steps.md new file mode 100644 index 0000000000..c3db0de5a2 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/_next-steps.md @@ -0,0 +1,8 @@ +--- +# ================================================================================ +# FIXED, DO NOT MODIFY THIS FILE +# ================================================================================ +weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation. +title: "Next Steps" # Always the same, html page title. +layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing. +--- diff --git a/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/background.md b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/background.md new file mode 100644 index 0000000000..7d11024ac7 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/background.md @@ -0,0 +1,20 @@ +--- +title: Get started with Buildkite on Google Axion C4A instances + +weight: 2 +layout: "learningpathall" +--- + +## Google Axion C4A instances on Google Cloud + +Google Axion C4A is a family of Arm-based virtual machines powered by Google's custom Axion CPU, which is built on Arm Neoverse V2 cores. Designed for high performance and energy efficiency, these VMs are well-suited for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications. + +The C4A series can provide a cost-efficient alternative to x86 VMs while leveraging the scalability and performance characteristics of the Arm architecture on Google Cloud. + +To learn more about Google Axion, see the blog [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu). 
+ +## Buildkite for CI/CD on Arm + +Buildkite is a flexible and scalable continuous integration and delivery (CI/CD) platform that allows you to run pipelines on your own infrastructure. By deploying Buildkite agents on Google Axion C4A VMs, you can take advantage of Arm's performance and cost efficiency to build, test, and deploy applications, including multi-architecture Docker images. + +Buildkite integrates seamlessly with version control systems such as GitHub and supports advanced workflows for cloud migration, multi-arch builds, and testing microservices. Learn more in the [Buildkite documentation](https://buildkite.com/docs). diff --git a/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/buildkite-agent-setup.md b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/buildkite-agent-setup.md new file mode 100644 index 0000000000..8fcd583e82 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/buildkite-agent-setup.md @@ -0,0 +1,96 @@ +--- +title: Set up Buildkite +weight: 5 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Set up a Buildkite agent + +After installing the Buildkite agent binary on a Google Axion C4A Arm VM, you can set up and configure a Buildkite agent and queue. + +## 1. Create an Agent Token + +Before configuring the agent, you need an agent token from your Buildkite organization. + +1. Log in to your Buildkite account (you can log in with GitHub) and go to your Organization Settings. +2. Click Agents in the left menu. +3. Click Create Agent Token. +4. Enter a name for your token, such as `buildkite-arm`. +5. Click Create Token. +6. Copy the token immediately; you won’t be able to see it again after leaving the page. + +![Buildkite Dashboard alt-text#center](images/agent-token.png "Create Buildkite agent Token") + + +## 2.
Configure Buildkite Agent + +Create the configuration directory and file on your local system with the commands below: + +```console +sudo tee /root/.buildkite-agent/buildkite-agent.cfg > /dev/null <}} + {{< tab header="Ubuntu" language="bash">}} +sudo apt update +sudo apt install unzip -y + {{< /tab >}} + {{< tab header="SUSE Linux" language="bash">}} +sudo zypper refresh +sudo zypper install -y curl unzip + {{< /tab >}} +{{< /tabpane >}} + +### Download and Install Buildkite Agent + +```console +sudo bash -c "$(curl -sL https://raw.githubusercontent.com/buildkite/agent/main/install.sh)" +``` + +This one-line command downloads and runs the Buildkite installer. + +The installer performs the following steps: + +- Download the latest version of the agent, for example `v3.109.1` +- Install it into the home directory of the root user at `/root/.buildkite-agent` +- Create a default config file (`buildkite-agent.cfg`) where you’ll later add your agent token + +```output + + _ _ _ _ _ _ _ _ + | | (_) | | | | (_) | | | + | |__ _ _ _| | __| | | ___| |_ ___ __ _ __ _ ___ _ __ | |_ + | '_ \| | | | | |/ _` | |/ / | __/ _ \ / _` |/ _` |/ _ \ '_ \| __| + | |_) | |_| | | | (_| | <| | || __/ | (_| | (_| | __/ | | | |_ + |_.__/ \__,_|_|_|\__,_|_|\_\_|\__\___| \__,_|\__, |\___|_| |_|\__| + __/ | + |___/ +Finding latest release... +Installing Version: v3.109.1 +Destination: /root/.buildkite-agent +Downloading https://github.com/buildkite/agent/releases/download/v3.109.1/buildkite-agent-linux-arm64-3.109.1.tar.gz + +A default buildkite-agent.cfg has been created for you in /root/.buildkite-agent + +Don't forget to update the config with your agent token! You can find it token on your "Agents" page in Buildkite + +Successfully installed to /root/.buildkite-agent + +You can now start the agent! + + /root/.buildkite-agent/bin/buildkite-agent start + +For docs, help and support: + + https://buildkite.com/docs/agent/v3 + +Happy building! 
<3 +``` + +### Verify installation +This command checks the version of the Buildkite agent and confirms it is installed successfully. + +```console +sudo /root/.buildkite-agent/bin/buildkite-agent --version +``` +You should see output similar to: + +```output +buildkite-agent version 3.109.1+10971.5c28e309805a3d748068a3ff7f5c531f51f6f495 +``` + +{{% notice Note %}} +Buildkite Agent version 3.43.0 introduced a Linux/Arm64 Docker image for the Buildkite Agent, making deployment and installation easier for Linux users on Arm. You can view the [release notes](https://github.com/buildkite/agent/releases/tag/v3.43.0). + +The [Arm Ecosystem Dashboard](https://developer.arm.com/ecosystem-dashboard/) recommends Buildkite Agent v3.43.0 as the minimum version on Arm platforms. +{{% /notice %}} + +### Install Docker and Docker Buildx + +Buildkite will use Docker to build and push images. + +Refresh the package repositories and install the required packages, including Git, python3-pip, and Docker: + +{{< tabpane code=true >}} + {{< tab header="Ubuntu" language="bash">}} +sudo apt update +sudo apt install python-is-python3 python3-pip -y +curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh +sudo usermod -aG docker $USER ; newgrp docker + {{< /tab >}} + {{< tab header="SUSE Linux" language="bash">}} +sudo zypper install -y git python3 python3-pip docker +sudo usermod -aG docker $USER ; newgrp docker + {{< /tab >}} +{{< /tabpane >}} + + +SUSE Linux requires extra steps to enable and start the Docker service so it runs automatically when your VM starts; you can skip this on Ubuntu: + +```console +sudo systemctl enable docker +sudo systemctl start docker +``` + +Verify the Docker installation by running a test container: + +```console +docker run hello-world +``` + +You will see the Docker message: + +```output +Hello from Docker!
+This message shows that your installation appears to be working correctly. + +To generate this message, Docker took the following steps: + 1. The Docker client contacted the Docker daemon. + 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. + (arm64v8) + 3. The Docker daemon created a new container from that image which runs the + executable that produces the output you are currently reading. + 4. The Docker daemon streamed that output to the Docker client, which sent it + to your terminal. + +To try something more ambitious, you can run an Ubuntu container with: + $ docker run -it ubuntu bash + +Share images, automate workflows, and more with a free Docker ID: + https://hub.docker.com/ + +For more examples and ideas, visit: + https://docs.docker.com/get-started/ +``` + +## Install Docker Buildx + +Docker Buildx is a plugin that allows building multi-architecture images, for example `arm64` and `amd64`. + +For SUSE Linux, you need to install Docker Buildx. This is not necessary on Ubuntu. + +Download the binary and move it to the Docker CLI plugin directory: + +```console +wget https://github.com/docker/buildx/releases/download/v0.26.1/buildx-v0.26.1.linux-arm64 +chmod +x buildx-v0.26.1.linux-arm64 +sudo mkdir -p /usr/libexec/docker/cli-plugins +sudo mv buildx-v0.26.1.linux-arm64 /usr/libexec/docker/cli-plugins/docker-buildx +``` + +Now that the Buildkite installation is complete, you can set up the Buildkite agent. 
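A multi-architecture image published by Buildx is really a manifest list (an OCI image index) that points at one image per platform. As a quick illustration, with made-up digests, this is roughly the structure you would see from a command such as `docker buildx imagetools inspect --raw`:

```python
import json

# Trimmed, made-up example of a manifest list for a multi-arch image;
# the digests are placeholders, not real values.
raw = """
{
  "manifests": [
    {"digest": "sha256:aaa", "platform": {"architecture": "amd64", "os": "linux"}},
    {"digest": "sha256:bbb", "platform": {"architecture": "arm64", "os": "linux"}}
  ]
}
"""

index = json.loads(raw)
platforms = sorted(m["platform"]["architecture"] for m in index["manifests"])
print(platforms)
```

Seeing both `amd64` and `arm64` entries confirms that a single tag covers both targets, which is what a multi-arch Buildx push produces.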
diff --git a/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/instance.md b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/instance.md new file mode 100644 index 0000000000..dad269729e --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/instance.md @@ -0,0 +1,30 @@ +--- +title: Create a Google Axion C4A Arm virtual machine on GCP +weight: 3 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Overview + +In this section, you'll learn how to provision a Google Axion C4A Arm virtual machine on Google Cloud Platform (GCP) using the `c4a-standard-4` instance type with 4 vCPUs and 16 GB memory in the Google Cloud Console. + +## Provision a Google Axion C4A VM in the Google Cloud Console + +To create a virtual machine based on the C4A instance type: + +1. Navigate to the [Google Cloud Console](https://console.cloud.google.com/). +2. Go to Compute Engine > VM Instances and select **Create Instance**. +3. Under **Machine configuration**: + - Populate fields such as **Instance name**, **Region**, and **Zone**. + - Set **Series** to **C4A**. + - Select **c4a-standard-4** for machine type. + + ![Create a Google Axion C4A Arm virtual machine in the Google Cloud Console with c4a-standard-4 selected alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console") + +4. Under **OS and Storage**, select **Change**, then choose an Arm64-based OS image. For this Learning Path, use **SUSE Linux Enterprise Server** or **Ubuntu**. Select your preferred version for the operating system. Ensure you choose the Arm version, then select **Select**. +5. Under **Networking**, enable **Allow HTTP traffic**. +6. Select **Create** to launch the instance. 
+ +Once the instance is running, connect using SSH. \ No newline at end of file diff --git a/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/multiarch_buildkite_pipeline.md b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/multiarch_buildkite_pipeline.md new file mode 100644 index 0000000000..ebd93c42ac --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/multiarch_buildkite_pipeline.md @@ -0,0 +1,108 @@ +--- +title: Create a Flask app and set up the Buildkite pipeline +weight: 6 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Create an application + +You can now create an application to containerize with Docker. The example below is a simple Flask-based Python application. + +Create a new public GitHub repository where you can create the Dockerfile and the Python file for the application. + +### Create a Dockerfile + +In the repository you created, add a new file named `Dockerfile` with this content: + +```dockerfile +FROM python:3.12-slim + +WORKDIR /app + +COPY app.py . + +RUN pip install flask + +EXPOSE 5000 + +CMD ["python", "app.py"] +``` + +### Create a Python application + +In the same repo, add a Python source file named `app.py`: + +```python +from flask import Flask +app = Flask(__name__) + +@app.route('/') +def hello(): + return "Hello from Arm-based Buildkite runner!" + +if __name__ == "__main__": + app.run(host="0.0.0.0", port=5000) +``` + +This Python code defines a simple Flask web server that listens on all interfaces (0.0.0.0) at port 5000 and responds with "Hello from Arm-based Buildkite runner!" when the root URL (/) is accessed. + +### Add the code to your GitHub repository + +Before triggering the pipeline, your GitHub repository should have: + +- `Dockerfile` (defines your multi-arch image) +- `app.py` (your Python microservice) + +You will need the path to the GitHub repository when you create a Buildkite pipeline below.
+ +### Add Docker credentials as Buildkite secrets + +Make sure to add your Docker credentials as secrets in the Buildkite UI. + +Navigate to Buildkite, select Agents, then Secrets, and add `DOCKER_USERNAME` and `DOCKER_PASSWORD`. + +![Buildkite Dashboard alt-text#center](images/secrets.png "Set Secrets") + +### Create a Buildkite pipeline for multi-arch builds + +In Buildkite, define your pipeline using YAML through the UI. + +Go to the Buildkite dashboard and select Pipelines, then New Pipeline. + +Fill out the form: + + - Git Repository: Enter your GitHub repository URL (SSH or HTTPS). + - Name: Enter a name for your pipeline. + +![Buildkite Dashboard alt-text#center](images/pipeline.png "Create Pipeline") + +In the Steps (YAML Steps) section, paste your pipeline YAML. + +```yaml +steps: + - label: "Build and Push Multiarch App" + env: + DOCKER_CONFIG: "~/.docker" + commands: + - echo "Testing env hook..." + - env | grep DOCKER + - ~/.buildkite-agent/bin/buildkite-agent secret get "DOCKER_PASSWORD" | docker login -u "$(~/.buildkite-agent/bin/buildkite-agent secret get "DOCKER_USERNAME")" --password-stdin + - docker buildx rm mybuilder || true + - docker buildx create --use --name mybuilder + - docker buildx inspect --bootstrap + - docker buildx build --platform linux/amd64,linux/arm64 -t "$(~/.buildkite-agent/bin/buildkite-agent secret get "DOCKER_USERNAME")/multi-arch-app:latest" --push . + agents: + queue: buildkite-queue1 +``` + +![Buildkite Dashboard alt-text#center](images/yaml.png "YAML steps") + +Click Create Pipeline. + +Trigger a new build by clicking New Build on your pipeline’s dashboard. + +![Buildkite Dashboard alt-text#center](images/build-p.png "Create Build") + +Once your files and pipeline are ready, you can validate that your Buildkite agent is running and ready to execute jobs.
diff --git a/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/validation.md b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/validation.md new file mode 100644 index 0000000000..10886be473 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/buildkite-gcp/validation.md @@ -0,0 +1,66 @@ +--- +title: Run the Buildkite pipeline +weight: 7 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Run the Buildkite pipeline for multi-arch builds + +Follow the steps below to run your pipeline on an Arm-based Buildkite agent. You will use Docker Buildx to create a multi-architecture image for both `arm64` and `amd64`. + +### Ensure the agent is running + +Before your pipeline can execute, the Buildkite agent must be running and connected to your Buildkite account. To verify the agent status, run the following command on your VM: + +```console +sudo /root/.buildkite-agent/bin/buildkite-agent status +``` + +This command checks the current state of your Buildkite agent and displays its connection status. When the agent is properly running and connected, you'll see logs indicating "Registered agent" in the output, confirming that the agent is online and ready to receive jobs from Buildkite. The agent continuously listens for new pipeline jobs and executes the steps you've defined in your configuration. + +### Trigger the pipeline + +To start your pipeline, navigate to your pipeline in the Buildkite web interface. From your Buildkite dashboard, select the pipeline you created and click the "New Build" button. Choose the branch you want to build from the dropdown menu, then click "Start Build" to begin execution. + +![Buildkite Dashboard alt-text#center](images/build-p.png "Trigger the pipeline") + +When you trigger the pipeline, Buildkite sends the job to your Arm-based agent and begins executing the steps defined in your YAML configuration file. 
The agent will process each step in sequence, starting with Docker login, followed by creating the Buildx builder, and finally building and pushing your multi-architecture Docker image. + +### Monitor the build + +You can see the logs of your build live in the Buildkite UI. + +The steps include: +- Docker login +- Buildx builder creation +- Multi-arch Docker image build and push + +![Buildkite Dashboard alt-text#center](images/log.png "Monitor the build") + +### Verify the multi-arch image + +After the pipeline completes successfully, you can go to Docker Hub and verify the pushed multi-arch images: + +![Docker-Hub alt-text#center](images/multi-arch-image.png "Figure 3: Docker image") + +### Run the Flask application on Arm + +```console +docker pull <your-docker-username>/multi-arch-app:latest +docker run --rm -p 80:5000 <your-docker-username>/multi-arch-app:latest +``` + +These commands pull the image and run the Flask application inside a container, mapping container port 5000 to port 80 on the host machine. + +You can now visit the VM’s public IP to access the Flask application. + +```console +http://<your-vm-public-ip> +``` +You should see output similar to: + +![Buildkite Dashboard alt-text#center](images/browser.png "Figure 4: Verify Docker Images") + +Your pipeline is working, and you have successfully built and run the Flask application using your Arm-based Buildkite agent. diff --git a/content/learning-paths/servers-and-cloud-computing/mysql-azure/_index.md b/content/learning-paths/servers-and-cloud-computing/mysql-azure/_index.md index 236fdb3299..74bcfe5563 100644 --- a/content/learning-paths/servers-and-cloud-computing/mysql-azure/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/mysql-azure/_index.md @@ -1,18 +1,15 @@ --- title: Deploy MySQL on Microsoft Azure Cobalt 100 processors -draft: true -cascade: - draft: true - + minutes_to_complete: 30 -who_is_this_for: This is an introductory topic that introduces MySQL deployment on Microsoft Azure Cobalt 100 (Arm-based) virtual machines.
It is designed for developers migrating MySQL applications from x86_64 to Arm. +who_is_this_for: This is an introductory topic for developers migrating MySQL applications from x86_64 to Arm. learning_objectives: - - Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image. - - Deploy MySQL on the Ubuntu virtual machine. - - Perform MySQL baseline testing and benchmarking on Arm64 virtual machines. + - Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image + - Deploy MySQL on the Ubuntu virtual machine + - Perform MySQL baseline testing and benchmarking on Arm64 virtual machines prerequisites: - A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6) diff --git a/content/learning-paths/servers-and-cloud-computing/mysql-azure/background.md b/content/learning-paths/servers-and-cloud-computing/mysql-azure/background.md index c36ac02dea..2fb3579b91 100644 --- a/content/learning-paths/servers-and-cloud-computing/mysql-azure/background.md +++ b/content/learning-paths/servers-and-cloud-computing/mysql-azure/background.md @@ -8,9 +8,7 @@ layout: "learningpathall" ## Cobalt 100 Arm-based processor -Azure Cobalt 100 is Microsoft’s first-generation Arm-based processor, designed for cloud-native, scale-out Linux workloads. Based on Arm’s Neoverse-N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency. Running at 3.4 GHz, it provides a dedicated physical core for each vCPU, ensuring consistent and predictable performance. - -Typical workloads include web and application servers, data analytics, open-source databases, and caching systems. +Azure Cobalt 100 is Microsoft’s first-generation Arm-based processor, designed for cloud-native, scale-out Linux workloads. Based on Arm’s Neoverse-N2 architecture, it is a 64-bit CPU that delivers improved performance and energy efficiency. 
Running at 3.4 GHz, it provides a dedicated physical core for each vCPU, ensuring consistent and predictable performance. Typical workloads include web and application servers, data analytics, open-source databases, and caching systems. To learn more, see the Microsoft blog [Announcing the preview of new Azure virtual machines based on the Azure Cobalt 100 processor](https://techcommunity.microsoft.com/blog/azurecompute/announcing-the-preview-of-new-azure-vms-based-on-the-azure-cobalt-100-processor/4146353). @@ -18,7 +16,7 @@ To learn more, see the Microsoft blog [Announcing the preview of new Azure virtu MySQL is an open-source relational database management system (RDBMS) widely used for storing, organizing, and managing structured data. It uses SQL (Structured Query Language) for querying and managing databases, making it one of the most popular choices for web applications, enterprise solutions, and cloud deployments. -It is known for its reliability, high performance, and ease of use. MySQL supports features like transactions, replication, partitioning, and robust security, making it suitable for both small applications and large-scale production systems. +You can use MySQL for reliable, high-performance data management. Take advantage of features such as transactions, replication, partitioning, and robust security to support both small applications and large-scale production systems. + +Learn more at the [MySQL official website](https://www.mysql.com/) and in the [MySQL documentation](https://dev.mysql.com/doc/). -Learn more at the [MySQL official website](https://www.mysql.com/) and in the [official documentation](https://dev.mysql.com/doc/) -. 
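The MySQL sections that follow assume an aarch64 (Arm64) VM throughout. As a quick sanity check after logging in, you can confirm the architecture from any language runtime — a minimal Python sketch, not part of the Learning Path's own steps:

```python
import platform

# Report the machine architecture; on an Azure Cobalt 100 VM this is 'aarch64'.
arch = platform.machine()
print(f"Detected architecture: {arch}")
if arch in ("aarch64", "arm64"):
    print("Running on an Arm64 system.")
else:
    print("Not an Arm64 system - this Learning Path assumes aarch64.")
```

You can get the same information from the shell with `uname -m`.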
diff --git a/content/learning-paths/servers-and-cloud-computing/mysql-azure/baseline.md b/content/learning-paths/servers-and-cloud-computing/mysql-azure/baseline.md index 7d160aea17..ced523c9b6 100644 --- a/content/learning-paths/servers-and-cloud-computing/mysql-azure/baseline.md +++ b/content/learning-paths/servers-and-cloud-computing/mysql-azure/baseline.md @@ -1,33 +1,42 @@ --- -title: Validate MySQL +title: Validate MySQL functionality on Azure Arm64 weight: 6 ### FIXED, DO NOT MODIFY layout: learningpathall --- -After installing MySQL on your Azure Cobalt 100 Arm64 virtual machine, you should run a functional test to confirm that the database is operational and ready for use. Beyond just checking service status, validation ensures MySQL is processing queries correctly, users can authenticate, and the environment is correctly configured for cloud workloads. +## The benefits of validation -### Start MySQL +After installing MySQL on your Azure Cobalt 100 Arm64 VM, run a functional test to confirm that the database is operational and ready for use. Beyond checking service status, validation ensures the following: -Ensure MySQL is running and configured to start on boot: +- MySQL is processing queries correctly +- Users can authenticate +- The environment is correctly configured for cloud workloads + +## Start MySQL + +Ensure MySQL is running and configured to start on boot by running the following: ```console sudo systemctl start mysql sudo systemctl enable mysql ``` -### Connect to MySQL +You’ve now validated that MySQL is running correctly and can store, retrieve, and organize data on your Azure Cobalt 100 Arm64 virtual machine. This confirms your environment is ready for development or production workloads. + +## Connect to MySQL Connect using the MySQL client: ```console mysql -u admin -p ``` -This opens the MySQL client and connects as the new user(admin), prompting you to enter the admin password. 
+This opens the MySQL client and connects as the new user (admin), prompting you to enter the admin password. -### Show and Use Database +## Show and use a database Once you’ve connected successfully with your new user, the next step is to create and interact with a database. This verifies that your MySQL instance is not only accessible but also capable of storing and organizing data. + Run the following commands inside the MySQL shell: ```sql @@ -37,10 +46,10 @@ USE baseline_test; SELECT DATABASE(); ``` -- `CREATE DATABASE baseline_test;` - Creates a new database named baseline_test. -- `SHOW DATABASES;` - Lists all available databases. -- `USE baseline_test;` - Switches to the new database. -- `SELECT DATABASE();` - Confirms the current database in use. +- `CREATE DATABASE baseline_test;` - creates a new database named baseline_test +- `SHOW DATABASES;` - lists all available databases +- `USE baseline_test;` - switches to the new database +- `SELECT DATABASE();` - confirms the current database in use You should see output similar to: @@ -74,7 +83,7 @@ mysql> SELECT DATABASE(); ``` You created a new database named `baseline_test`, verified its presence with `SHOW DATABASES`, and confirmed it is the active database using `SELECT DATABASE()`. -### Create and show Table +## Create and show a table After creating and selecting a database, the next step is to define a table, which represents how your data will be structured. In MySQL, tables are the core storage objects where data is inserted, queried, and updated. Run the following inside the `baseline_test` database: @@ -85,13 +94,14 @@ CREATE TABLE test_table ( name VARCHAR(50), value INT ); +SHOW TABLES; ``` -- `CREATE TABLE` - Defines a new table named test_table. - - `id` - Primary key with auto-increment. - - `name` - String field up to 50 characters. - - `value` - Integer field. -- `SHOW TABLES;` - Lists all tables in the current database. 
+- `CREATE TABLE` - defines a new table named test_table + - `id` - primary key with auto-increment + - `name` - string field up to 50 characters + - `value` - integer field +- `SHOW TABLES;` - lists all tables in the current database You should see output similar to: @@ -108,7 +118,7 @@ mysql> SHOW TABLES; ``` You successfully created the table `test_table` in the `baseline_test` database and verified its existence using `SHOW TABLES`. -### Insert Sample Data +## Insert sample data Once the table is created, you can populate it with sample rows. This validates that MySQL can handle write operations and that the underlying storage engine is working properly. @@ -120,16 +130,16 @@ VALUES ('Bob', 200), ('Charlie', 300); ``` -- `INSERT INTO test_table (name, value)` - Specifies which table and columns to insert into. -- `VALUES` - Provides three rows of data. +- `INSERT INTO test_table (name, value)` - specifies which table and columns to insert into +- `VALUES` - provides three rows of data After inserting data into `test_table`, you can confirm the write operation succeeded by retrieving the rows with: ```sql SELECT * FROM test_table; ``` -- `SELECT *` - Retrieves all columns. -- `FROM test_table` - Selects from the test_table. +- `SELECT *` - retrieves all columns +- `FROM test_table` - selects from the test_table You should see output similar to: @@ -144,6 +154,6 @@ mysql> SELECT * FROM test_table; +----+---------+-------+ 3 rows in set (0.00 sec) ``` -This confirms that that rows were successfully inserted, the auto-increment primary key (id) is working correctly and the query engine can read back from disk/memory and return results instantly. +This confirms that rows were successfully inserted, the auto-increment primary key (id) is working correctly and the query engine can read data from disk or memory and return results instantly. The functional test was successful. 
The test_table contains the expected three rows (Alice, Bob, and Charlie) with their respective values. This confirms that MySQL is working correctly on your Cobalt 100 Arm-based VM, completing the installation and validation phase. diff --git a/content/learning-paths/servers-and-cloud-computing/mysql-azure/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/mysql-azure/benchmarking.md index c9d5c5c949..2b940682d0 100644 --- a/content/learning-paths/servers-and-cloud-computing/mysql-azure/benchmarking.md +++ b/content/learning-paths/servers-and-cloud-computing/mysql-azure/benchmarking.md @@ -6,15 +6,17 @@ weight: 7 layout: learningpathall --- -## Benchmark MySQL on Azure Cobalt 100 Arm-based instances +## The benefits of benchmarking with mysqlslap -To understand how MySQL performs on Azure Cobalt 100 (Arm64) VMs, you can use the built-in `mysqlslap` tool. +Use the built-in `mysqlslap` tool to understand how MySQL performs on Azure Cobalt 100 (Arm64) VMs. `mysqlslap` is the official MySQL benchmarking tool used to simulate multiple client connections and measure query performance. It helps evaluate read/write throughput, query response times, and overall MySQL server performance under different workloads, making it ideal for baseline testing and optimization. -## Steps for MySQL Benchmarking with mysqlslap +## Steps for MySQL benchmarking with mysqlslap -1. Connect to MySQL and Create a Database +Set up MySQL benchmarking with these steps. + +## Connect to MySQL and create a database Before running `mysqlslap`, you will create a dedicated test database so that benchmarking doesn’t interfere with your application data. This ensures clean test results and avoids accidental modifications to production schemas. Connect to MySQL using the admin user: @@ -29,7 +31,7 @@ CREATE DATABASE benchmark_db; USE benchmark_db; ``` -2.
Create a Table and Populate Data +## Create a table and populate with data With a dedicated `benchmark_db` created, the next step is to define a test table and populate it with data. This simulates a realistic workload so that `mysqlslap` can measure query performance against non-trivial datasets. @@ -42,18 +44,19 @@ CREATE TABLE benchmark_table ( score INT ); ``` -Insert Sample Rows Manually: +## Insert sample rows for validation + +To quickly verify that inserts work correctly and test small queries, run the following command inside the MySQL shell: -For quick validation: ```sql INSERT INTO benchmark_table (username,score) VALUES ('John',100),('Jane',200),('Mike',300); ``` This verifies that inserts work correctly and allows you to test small queries. -Populate Automatically with 1000 Rows +## Populate table with 1000 rows automatically -For benchmarking, larger datasets give more meaningful results. You can use a stored procedure to generate rows programmatically: +For benchmarking, larger datasets give more meaningful results. Use a stored procedure to generate rows programmatically: ```sql DELIMITER // @@ -73,19 +76,26 @@ DROP PROCEDURE populate_benchmark_data; ``` At this stage, you have a populated `benchmark_table` inside `benchmark_db`. This provides a realistic dataset for running `mysqlslap`, enabling you to measure how MySQL performs on Azure Cobalt 100 under read-heavy, write-heavy, or mixed workloads. -## Run a Simple Read/Write Benchmark +## Run a simple read/write benchmark With the `benchmark_table` populated, you can run a synthetic workload using mysqlslap to simulate multiple clients performing inserts or queries at the same time. This tests how well MySQL handles concurrent connections and query execution. 
+Use the following command: + +```console +mysqlslap --user=admin --password="MyStrongPassword!" --host=127.0.0.1 --concurrency=10 --iterations=5 --query="INSERT INTO benchmark_db.benchmark_table (username,score) VALUES('TestUser',123);" --create-schema=benchmark_db +``` + +The table below provides descriptions of the options used: + +| Option | Description | Example Value | +|-------------------|------------------------------------------------------------------|------------------------------| +| `--user` / `--password` | MySQL login credentials | `admin` / `MyStrongPassword!` | +| `--host` | MySQL server address (use `127.0.0.1` for local) | `127.0.0.1` | +| `--concurrency` | Number of simultaneous clients | `10` | +| `--iterations` | Number of times to repeat the test | `5` | +| `--query` | SQL statement to run repeatedly | `INSERT ...` or `SELECT ...` | +| `--create-schema` | Database in which to run the query | `benchmark_db` | You should see output similar to: @@ -98,9 +108,9 @@ Benchmark Average number of queries per client: 1 ``` -Run a Read Benchmark (table scan): +## Run a read benchmark (table scan) -You can now run a test that simulates multiple clients querying the table at the same time and records the results: +Run the following command to simulate multiple clients querying the table concurrently and record the results: ```console mysqlslap --user=admin --password="MyStrongPassword!"
--host=127.0.0.1 --concurrency=10 --iterations=5 --query="SELECT * FROM benchmark_db.benchmark_table WHERE record_id < 500;" --create-schema=benchmark_db --verbose | tee -a /tmp/mysqlslap_benchmark.log @@ -117,16 +127,20 @@ Benchmark Average number of queries per client: 1 ``` -## Benchmark Results Table Explained: +## Understanding the benchmark results + +The following table lists the benchmark metrics with accompanying definitions. - Average number of seconds to run all queries: This is the average time it took for all the queries in one iteration to complete across all clients. It gives you a quick sense of overall performance. - Minimum number of seconds to run all queries: This is the fastest time any iteration of queries took. - Maximum number of seconds to run all queries: This is the slowest time any iteration of queries took. The closer this is to the average, the more consistent your performance is. - Number of clients running queries: Indicates how many simulated users (or connections) ran queries simultaneously during the test. - Average number of queries per client: Shows the average number of queries each client executed in the benchmark iteration. + | Metric | Description | + |-------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------| + | **Average number of seconds to run all queries** | The average time it took for all the queries in one iteration to complete across all clients. It gives you a quick sense of overall performance. | + | **Minimum number of seconds to run all queries** | The fastest time any iteration of queries took. | + | **Maximum number of seconds to run all queries** | The slowest time any iteration of queries took. The closer this is to the average, the more consistent your performance is. 
| + | **Number of clients running queries** | Indicates how many simulated users (or connections) ran queries simultaneously during the test. | + | **Average number of queries per client** | Shows the average number of queries each client executed in the benchmark iteration | -## Benchmark summary on Arm64: -Here is a summary of benchmark results collected on an Arm64 **D4ps_v6 Ubuntu Pro 24.04 LTS virtual machine**. +## Benchmark summary on Arm64 +Here is a summary of benchmark results collected on an Arm64 D4ps_v6 Ubuntu Pro 24.04 LTS virtual machine. | Query Type | Average Time (s) | Minimum Time (s) | Maximum Time (s) | Clients | Avg Queries per Client | |------------|-----------------|-----------------|-----------------|--------|----------------------| @@ -134,12 +148,12 @@ Here is a summary of benchmark results collected on an Arm64 **D4ps_v6 Ubuntu Pr | SELECT | 0.263 | 0.261 | 0.264 | 10 | 1 | -## Insights from Benchmark Results +## Insights from the benchmark results + +The benchmark results on the Azure Cobalt 100 Arm64 VM show: -The benchmark results on the Arm64 virtual machine show: +- Both `INSERT` and `SELECT` queries performed consistently, with average times of 0.267s and 0.263s respectively. +- The difference between minimum and maximum times was very small for both query types, showing stable and predictable behavior under load. +- With 10 clients and an average of 1 query per client, the system handled concurrent operations efficiently without significant delays. - Balanced Performance for Read and Write Queries: Both `INSERT` and `SELECT` queries performed consistently, with average times of 0.267s and 0.263s, respectively. - Low Variability Across Iterations: The difference between the minimum and maximum times was very small for both query types, indicating stable and predictable behavior under load. 
- Moderate Workload Handling: With 10 clients and an average of 1 query per client, the system handled concurrent operations efficiently without significant delays. - -This demonstrates that the MySQL setup on Arm64 provides reliable and steady performance for both data insertion and retrieval tasks, making it a solid choice for applications requiring dependable database operations. +MySQL on Azure Cobalt 100 Arm64 VMs provides reliable and steady performance for both data insertion and retrieval tasks, making it suitable for applications requiring dependable database operations. \ No newline at end of file diff --git a/content/learning-paths/servers-and-cloud-computing/mysql-azure/create-instance.md b/content/learning-paths/servers-and-cloud-computing/mysql-azure/create-instance.md index 83bee32945..72bf48f4aa 100644 --- a/content/learning-paths/servers-and-cloud-computing/mysql-azure/create-instance.md +++ b/content/learning-paths/servers-and-cloud-computing/mysql-azure/create-instance.md @@ -20,37 +20,47 @@ This Learning Path focuses on general-purpose virtual machines in the Dpsv6 seri While the steps to create this instance are included here for convenience, you can also refer to the [Deploy a Cobalt 100 virtual machine on Azure Learning Path](/learning-paths/servers-and-cloud-computing/cobalt/). -#### Create an Arm-based Azure Virtual Machine +## Create an Arm-based Azure virtual machine -Creating a virtual machine based on Azure Cobalt 100 is no different from creating any other virtual machine in Azure. To create an Azure virtual machine, launch the Azure portal and navigate to "Virtual Machines". -1. Select "Create", and click on "Virtual Machine" from the drop-down list. -2. Inside the "Basic" tab, fill in the Instance details such as "Virtual machine name" and "Region". -3. Choose the image for your virtual machine (for example, Ubuntu Pro 24.04 LTS) and select “Arm64” as the VM architecture. -4. 
In the “Size” field, click on “See all sizes” and select the D-Series v6 family of virtual machines. Select “D4ps_v6” from the list. +Creating a virtual machine based on Azure Cobalt 100 is no different to creating any other virtual machine in Azure. Follow the steps below to create an Azure virtual machine: -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance.png "Figure 1: Select the D-Series v6 family of virtual machines") +- Launch the Azure portal and navigate to **Virtual Machines**. +- Select **Create**, and select **Virtual Machine** from the drop-down list. +- Inside the **Basic** tab, fill in the instance details such as **Virtual machine name** and **Region**. +- Select the image for your virtual machine (for example, Ubuntu Pro 24.04 LTS) and select **Arm64** as the VM architecture. +- In the **Size** field, select **See all sizes** and select the D-Series v6 family of virtual machines. +- Select **D4ps_v6** from the list as shown in the diagram below: -5. Select "SSH public key" as an Authentication type. Azure will automatically generate an SSH key pair for you and allow you to store it for future use. It is a fast, simple, and secure way to connect to your virtual machine. -6. Fill in the Administrator username for your VM. -7. Select "Generate new key pair", and select "RSA SSH Format" as the SSH Key Type. RSA could offer better security with keys longer than 3072 bits. Give a Key pair name to your SSH key. -8. In the "Inbound port rules", select HTTP (80) and SSH (22) as the inbound ports. +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance.png "Select the D-Series v6 family of virtual machines") -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance1.png "Figure 2: Allow inbound port rules") +- For **Authentication type**, select **SSH public key**. 
{{% notice Note %}} +Azure generates an SSH key pair for you and lets you save it for future use. This method is fast, secure, and easy for connecting to your virtual machine. +{{% /notice %}} +- Fill in the **Administrator username** for your VM. +- Select **Generate new key pair**, and select **RSA SSH Format** as the SSH Key Type. {{% notice Note %}} +RSA offers better security with keys longer than 3072 bits. +{{% /notice %}} +- Give your SSH key a key pair name. +- In the **Inbound port rules**, select **HTTP (80)** and **SSH (22)** as the inbound ports, as shown below: + +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance1.png "Allow inbound port rules") -9. Click on the "Review + Create" tab and review the configuration for your virtual machine. It should look like the following: +- Now select the **Review + Create** tab and review the configuration for your virtual machine. It should look like the following: -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/ubuntu-pro.png "Figure 3: Review and Create an Azure Cobalt 100 Arm64 VM") +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/ubuntu-pro.png "Review and create an Azure Cobalt 100 Arm64 VM") -10. Finally, when you are confident about your selection, click on the "Create" button, and click on the "Download Private key and Create Resources" button. +- When you are happy with your selection, select the **Create** button and then **Download Private key and Create Resource** button. -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance4.png "Figure 4: Download Private key and Create Resources") +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance4.png "Download private key and create resource") -11. 
Your virtual machine should be ready and running within no time. You can SSH into the virtual machine using the private key, along with the Public IP details. +Your virtual machine should be ready and running in a few minutes. You can SSH into the virtual machine using the private key, along with the public IP details. -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/final-vm.png "Figure 5: VM deployment confirmation in Azure portal") +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/final-vm.png "VM deployment confirmation in Azure portal") {{% notice Note %}} -To learn more about Arm-based virtual machine in Azure, refer to “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure). +To learn more about Arm-based virtual machine in Azure, see “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure). {{% /notice %}} + +Your Azure Cobalt 100 Arm64 virtual machine is now ready. Continue to the next step to install and configure MySQL. diff --git a/content/learning-paths/servers-and-cloud-computing/mysql-azure/deploy.md b/content/learning-paths/servers-and-cloud-computing/mysql-azure/deploy.md index 82236065c5..e02742dc36 100644 --- a/content/learning-paths/servers-and-cloud-computing/mysql-azure/deploy.md +++ b/content/learning-paths/servers-and-cloud-computing/mysql-azure/deploy.md @@ -1,53 +1,64 @@ --- -title: Install MySQL +title: Deploy MySQL on an Azure Arm64 virtual machine weight: 5 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Install MySQL on Azure Cobalt 100 +## Get started with MySQL on an Azure Arm64 virtual machine -This section demonstrates how to install and secure MySQL on an Azure Arm64 virtual machine. 
You will configure the database, set up security measures, and verify that the service is running properly, making the environment ready for development, testing, or production deployment. +This section demonstrates how to install and secure MySQL on an Azure Arm64 virtual machine. It shows you how to do the following: -## Install MySQL and Tools +- Configure the database +- Set up security measures +- Verify that the service is running properly -Before installing MySQL, it’s important to ensure your VM is updated so you have the latest Arm64-optimized libraries and security patches. Ubuntu and other modern Linux distributions maintain Arm-native MySQL packages, so installation is straightforward with the system package manager. +Follow these steps to ensure that the environment is ready for development, testing, or production deployment. -1. Update the system and install MySQL -Update your system's package lists to ensure you get the latest versions and then install the MySQL server using the package manager. +## Prepare and install MySQL and tools + +First, update your VM to ensure you have the latest Arm64-optimized libraries and security patches. Ubuntu and other modern Linux distributions maintain Arm-native MySQL packages, so installation is straightforward with the system package manager. + +## Update the system and install MySQL +Update your system's package lists to make sure you install the latest Arm64-optimized MySQL packages. Then, use the package manager to install the MySQL server: ```console sudo apt update sudo apt install -y mysql-server ``` -2. Secure MySQL installation +## Secure MySQL installation + +Once MySQL is installed, the default configuration works but leaves your database exposed to security risks. To safeguard your installation, use the `mysql_secure_installation` script. 
This interactive tool helps you: + +- Set a strong password for the root account +- Remove anonymous users +- Disable remote root login +- Remove test databases +- Reload privilege tables + +These steps strengthen your MySQL server and reduce common vulnerabilities. -Once MySQL is installed, the default configuration is functional but not secure. -You will lock down your database so only you can access it safely. This involves setting up a password and cleaning up unused accounts to make sure no one else can access your data. +To begin securing your MySQL installation, run the following command: ```console sudo mysql_secure_installation ``` -This interactive script walks you through several critical security steps. Follow the prompts: +The interactive script walks you through several critical security steps. After following these and securing your MySQL installation, the database is significantly harder to compromise. -- Set a strong password for root. -- Remove anonymous users. -- Disallow remote root login. -- Remove test databases. -- Reload privilege tables. +## Start and enable MySQL service +After installing and securing MySQL, the next step is to ensure that the MySQL server process (mysqld) is running. You should also configure it to start automatically whenever your VM boots. -After securing your MySQL installation, the database is significantly harder to compromise. - -3. Start and enable MySQL service -After installation and securing MySQL, the next step is to ensure that the MySQL server process (mysqld) is running and configured to start automatically whenever your VM boots. 
+Use the following command: ```console sudo systemctl start mysql sudo systemctl enable mysql ``` -Verify MySQL Status: +## Verify MySQL status + +Check the status of the MySQL service to confirm that it is running and enabled: ```console sudo systemctl status mysql @@ -68,9 +79,9 @@ mysql.service - MySQL Community Server ``` You should see `active (running)` in the output, which indicates that MySQL is up and running. -4. Verify MySQL version +## Verify MySQL version -You check also check the installed version of MySQL to confirm it’s set up correctly and is running. +You can also check the installed version of MySQL to confirm it’s set up correctly and is running. ```console mysql -V @@ -80,9 +91,9 @@ You should see output similar to: ```output mysql Ver 8.0.43-0ubuntu0.24.04.1 for Linux on aarch64 ((Ubuntu)) ``` -5. Access MySQL shell +## Access MySQL shell -After confirming that MySQL is running, the next step is to log in to the MySQL monitor (shell). This is the command-line interface used to interact with the database server for administrative tasks such as creating users, managing databases, and tuning configurations. +After confirming that MySQL is running, the next step is to log in to the MySQL monitor (shell). It is the command-line interface used to interact with the database server for administrative tasks such as creating users, managing databases, and tuning configurations. ``` sudo mysql @@ -104,18 +115,19 @@ Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> ``` -The `mysql> prompt` indicates you are now in the MySQL interactive shell and can issue SQL commands. +The `mysql> prompt` indicates that you are now in the MySQL interactive shell and can issue SQL commands. + +## Create a new user -6. Create a new user +Using the root account for everyday database tasks isn't recommended because it exposes your system to unnecessary risks. 
Instead, create dedicated users with only the privileges they need for their roles.

-While the root account gives you full control, it’s best practice to avoid using it for day-to-day database operations. Instead, you should create separate users with specific privileges.
-Start by entering the MySQL shell:
+To get started, access the MySQL shell:

```console
sudo mysql
```

-Inside the shell, create a new user:
+Inside the MySQL shell, create a new user:

```sql
CREATE USER 'admin'@'localhost' IDENTIFIED BY 'MyStrongPassword!';
@@ -127,18 +139,16 @@ EXIT;
Replace `MyStrongPassword!` with a strong password of your choice. The `FLUSH PRIVILEGES;` statement reloads the in-memory privilege tables from disk, applying changes immediately.
-## Verify Access with New User
+## Verify access with the new user

-Once you’ve created a new MySQL user, it’s critical to test login and confirm that the account works as expected. This ensures the account is properly configured and can authenticate against the MySQL server.
+After creating a new MySQL user, test the login. This confirms that the account is configured correctly and can authenticate with the MySQL server.

Run the following command (for user `admin`):

```console
mysql -u admin -p
```

-You will then be asked to enter the password you created in the previous step.
-
-You should see output similar to:
+You will then be asked to enter the password you created in the previous step. You should see output similar to:

```output
Enter password:
@@ -156,4 +166,4 @@ Type 'help;' or '\h' for help. Type '\c' to clear the current input statement

mysql> exit
```
-With this, the MySQL installation is complete. You can now proceed with baseline testing of MySQL in the next section.
+The MySQL installation is complete. You can now proceed with baseline testing of MySQL in the next section.
diff --git a/content/learning-paths/servers-and-cloud-computing/nlp-hugging-face/pytorch-nlp-hf.md b/content/learning-paths/servers-and-cloud-computing/nlp-hugging-face/pytorch-nlp-hf.md index 8de375e109..7284a8a732 100644 --- a/content/learning-paths/servers-and-cloud-computing/nlp-hugging-face/pytorch-nlp-hf.md +++ b/content/learning-paths/servers-and-cloud-computing/nlp-hugging-face/pytorch-nlp-hf.md @@ -96,6 +96,8 @@ The output from this script should look like: 3) positive 0.0477 ``` +You should see three lines, each with a rank, sentiment label (negative / neutral / positive), and confidence score. The first line is the model’s strongest guess. The three probabilities should sum to 1. In this example, the model is confident the sentiment is negative. + You have successfully performed sentiment analysis on the input text, all running on your Arm AArch64 CPU. You can change the input text in your example and re-run the classification example. Now that you have run the model, let's add the ability to profile the model execution. You can use the [PyTorch Profiler](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html) to analyze the execution time on the CPU. Copy the contents shown below into a file named `sentiment-analysis-profile.py`: @@ -177,7 +179,7 @@ Self CPU time total: 51.903ms 2) neutral 0.2287 3) positive 0.0477 ``` -In addition to the classification output from the model, you can now see the execution time for the different operators. +In addition to the classification output from the model, you can now see the execution time for the different operators. The table shows how much time each operation takes on the CPU, both by itself and including any child operations. 
You can experiment with the [BFloat16 floating-point number format](/install-guides/pytorch#bfloat16-floating-point-number-format) and [Transparent huge pages](/install-guides/pytorch#transparent-huge-pages) settings with PyTorch and see how that impacts the performance of your model. diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/_index.md b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/_index.md new file mode 100644 index 0000000000..3c97d84ac8 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/_index.md @@ -0,0 +1,64 @@ +--- +title: Deploy Node.js on Google Cloud C4A (Arm-based Axion VMs) + +draft: true +cascade: + draft: true + +minutes_to_complete: 30 + +who_is_this_for: This is an introductory topic for software developers migrating Node.js workloads from x86_64 to Arm-based servers, specifically on Google Cloud C4A virtual machines built on Axion processors. + + +learning_objectives: + - Provision an Arm-based SUSE SLES virtual machine on Google Cloud (C4A with Axion processors) + - Install and configure Node.js on a SUSE Arm64 (C4A) instance + - Validate Node.js functionality with baseline HTTP server tests + - Benchmark Node.js performance using Autocannon on Arm64 (AArch64) architecture + + +prerequisites: + - A [Google Cloud Platform (GCP)](https://cloud.google.com/free) account with billing enabled + - Familiarity with networking concepts and [Node.js event-driven architecture](https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick) + +author: Pareena Verma + +##### Tags +skilllevels: Introductory +subjects: Web +cloud_service_providers: Google Cloud + +armips: + - Neoverse + +tools_software_languages: + - Node.js + - npm + - Autocannon + +operatingsystems: + - Linux + +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ 
+further_reading: + - resource: + title: Google Cloud documentation + link: https://cloud.google.com/docs + type: documentation + + - resource: + title: Node.js documentation + link: https://nodejs.org/en + type: documentation + + - resource: + title: Autocannon documentation + link: https://www.npmjs.com/package/autocannon/v/5.0.0 + type: documentation + +weight: 1 +layout: "learningpathall" +learning_path_main_page: "yes" +--- diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/_next-steps.md new file mode 100644 index 0000000000..c3db0de5a2 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/_next-steps.md @@ -0,0 +1,8 @@ +--- +# ================================================================================ +# FIXED, DO NOT MODIFY THIS FILE +# ================================================================================ +weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation. +title: "Next Steps" # Always the same, html page title. +layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing. +--- diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/background.md b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/background.md new file mode 100644 index 0000000000..715c359130 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/background.md @@ -0,0 +1,23 @@ +--- +title: Getting started with Node.js on Google Axion C4A (Arm Neoverse-V2) + +weight: 2 + +layout: "learningpathall" +--- + +## Google Axion C4A Arm instances in Google Cloud + +Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse-V2 cores. 
Designed for high-performance and energy-efficient computing, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications.
+
+The C4A series provides a cost-effective alternative to x86 virtual machines while leveraging the scalability and performance benefits of the Arm architecture in Google Cloud.
+
+To learn more about Google Axion, refer to the [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu) blog.
+
+## Node.js
+
+Node.js is an open-source, cross-platform JavaScript runtime environment built on Chrome's V8 engine.
+
+It allows developers to build scalable server-side applications, APIs, and backend services using JavaScript. Node.js features an event-driven, non-blocking I/O model, making it highly efficient for handling concurrent connections.
+
+Node.js is widely used for web servers, real-time applications, microservices, and cloud-native backend services. Learn more from the [Node.js official website](https://nodejs.org/en) and its [official documentation](https://nodejs.org/docs/latest/api/).
diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/baseline.md b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/baseline.md
new file mode 100644
index 0000000000..ea830061ff
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/baseline.md
@@ -0,0 +1,87 @@
+---
+title: Node.js baseline testing on a Google Axion C4A Arm virtual machine
+weight: 5
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+
+Now that Node.js is installed on your GCP C4A Arm virtual machine, follow these steps to confirm that it is working.
+
+## Validate Node.js installation with a baseline test
+
+### 1.
Run a simple REPL test
+The Node.js REPL (Read-Eval-Print Loop) allows you to run JavaScript commands interactively.
+
+```console
+node
+```
+Inside the REPL, type:
+
+```console
+console.log("Hello from Node.js");
+```
+You should see output similar to:
+
+```output
+Hello from Node.js
+undefined
+```
+This confirms that Node.js can execute JavaScript commands successfully. Press Ctrl+D to exit the REPL.
+
+### 2. Test a basic HTTP server
+You can now create a small HTTP server to validate that Node.js can handle web requests.
+
+Use a text editor to create a file named `app.js` with the code below:
+
+```javascript
+const http = require('http');
+
+const server = http.createServer((req, res) => {
+  res.writeHead(200, { 'Content-Type': 'text/plain' });
+  res.end('Baseline test successful!\n');
+});
+
+server.listen(80, '0.0.0.0', () => {
+  console.log('Server running at http://0.0.0.0:80/');
+});
+```
+ - This server listens on port 80.
+ - Binding to 0.0.0.0 allows connections from any IP, not just localhost.
+
+Next, run the HTTP server in the background with sudo (required to bind to port 80):
+
+```console
+export MY_NODE=`which node`
+sudo ${MY_NODE} app.js &
+```
+You should see output similar to:
+
+```output
+Server running at http://0.0.0.0:80/
+```
+#### Test locally with curl
+
+```console
+curl http://localhost:80
+```
+
+You should see output similar to:
+
+```output
+Baseline test successful!
+```
+
+#### Test from a browser
+You can also access the server from a browser using your VM's public IP. Run the following command to print your VM’s public URL, then open it in a browser:
+
+```console
+echo "http://$(curl -s ifconfig.me):80/"
+```
+
+You should see the following message in your browser, confirming that your Node.js HTTP server is running successfully:
+
+![Node.js Browser alt-text#center](images/node-browser.png)
+
+This verifies the basic functionality of the Node.js installation before proceeding to the benchmarking.
diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/benchmarking.md
new file mode 100644
index 0000000000..e0dbfc7d8b
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/benchmarking.md
@@ -0,0 +1,112 @@
+---
+title: Node.js Benchmarking
+weight: 6
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Node.js benchmarking with Autocannon
+
+After validating that Node.js is installed and your HTTP server is running, you can benchmark it using **Autocannon**.
+
+### Install Autocannon
+**Autocannon** is a fast HTTP/1.1 benchmarking tool for Node.js, used to measure server throughput, latency, and request handling under concurrent load.
+
+```console
+npm install -g autocannon
+```
+
+### Start your Node.js HTTP server
+
+If your sample HTTP server is not already running from the last section, you can start it by typing:
+```console
+export MY_NODE=`which node`
+sudo ${MY_NODE} app.js &
+```
+
+The server should be listening on port 80 in the background:
+
+```output
+Server running at http://0.0.0.0:80/
+```
+
+### Run a basic benchmark (local)
+
+```console
+autocannon -c 100 -d 10 http://localhost:80
+```
+- `-c 100` → 100 concurrent connections
+- `-d 10` → duration 10 seconds
+- URL → endpoint to test

+You should see output similar to:
+```output
+Running 10s test @ http://localhost:80
+100 connections
+
+
+┌─────────┬──────┬──────┬───────┬──────┬─────────┬─────────┬───────┐
+│ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%  │ Avg     │ Stdev   │ Max   │
+├─────────┼──────┼──────┼───────┼──────┼─────────┼─────────┼───────┤
+│ Latency │ 1 ms │ 1 ms │ 2 ms  │ 2 ms │ 1.06 ms │ 0.41 ms │ 28 ms │
+└─────────┴──────┴──────┴───────┴──────┴─────────┴─────────┴───────┘
+┌───────────┬─────────┬─────────┬─────────┬────────┬───────────┬──────────┬─────────┐
+│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%  │ Avg       │ Stdev    │ Min     │
+├───────────┼─────────┼─────────┼─────────┼────────┼───────────┼──────────┼─────────┤
+│ Req/Sec   │ 66,175  │ 66,175  │ 70,847  │ 72,191 │ 70,713.61 │ 1,616.86 │ 66,134  │
+├───────────┼─────────┼─────────┼─────────┼────────┼───────────┼──────────┼─────────┤
+│ Bytes/Sec │ 12.8 MB │ 12.8 MB │ 13.7 MB │ 14 MB  │ 13.7 MB   │ 313 kB   │ 12.8 MB │
+└───────────┴─────────┴─────────┴─────────┴────────┴───────────┴──────────┴─────────┘
+
+Req/Bytes counts sampled once per second.
+# of samples: 10
+
+707k requests in 10.02s, 137 MB read
+```
+
+### Understanding Node.js benchmark metrics and results with Autocannon
+
+- **Avg (Average Latency)** → The mean time it took for requests to get a response.
+- **Stdev (Standard Deviation)** → How much individual request times vary around the average. Smaller numbers mean more consistent response times.
+- **Min (Minimum Latency)** → The fastest request observed during the test.
+
+### Benchmark summary on x86_64
+To compare the benchmark results, the same benchmark was run on an x86_64 `c4-standard-4` (4 vCPUs, 15 GB memory) VM in GCP, running SUSE:
+
+Latency (ms):
+
+| Metric   | 2.5% | 50% (Median) | 97.5% | 99% | Avg    | Stdev  | Max   |
+|----------|------|--------------|-------|-----|--------|--------|-------|
+| Latency  | 0    | 1            | 2     | 2   | 0.73   | 0.87   | 104   |
+
+Throughput:
+
+| Metric     | 1%     | 2.5%   | 50%     | 97.5%   | Avg      | Stdev     | Min     |
+|------------|--------|--------|---------|---------|----------|-----------|---------|
+| Req/Sec    | 70,143 | 70,143 | 84,479  | 93,887  | 84,128   | 7,547.18  | 70,095  |
+| Bytes/Sec  | 13.6 MB| 13.6 MB| 16.4 MB | 18.2 MB | 16.3 MB  | 1.47 MB   | 13.6 MB |
+
+### Benchmark summary on Arm64
+Results from the earlier run on the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm64 VM in GCP (SUSE):
+
+Latency (ms):
+
+| Metric   | 2.5% | 50% (Median) | 97.5% | 99% | Avg  | Stdev | Max  |
+|----------|------|--------------|-------|-----|------|-------|------|
+| Latency  | 1    | 1            | 3     | 3   | 1.2  | 0.62  | 24   |
+
+Throughput:
+
+| Metric     | 1%     | 2.5%   | 50%     | 97.5%   | Avg      | Stdev    | Min     |
+|------------|--------|--------|---------|---------|----------|----------|---------|
+| Req/Sec    | 45,279 | 45,279 | 54,719  | 55,199  | 53,798.4 | 2,863.96 | 45,257  |
+| Bytes/Sec  | 8.78 MB| 8.78 MB| 10.6 MB | 10.7 MB | 10.4 MB  | 557 kB   | 8.78 MB |
+
+### Node.js performance benchmarking comparison on Arm64 and x86_64
+When you compare the benchmarking results, you will notice that on the Google Axion C4A Arm-based instances:
+
+- Average latency is very low (~1.2 ms) with consistent response times.
+- Maximum latency spikes are rare, reaching up to 24 ms.
+- The server handles high throughput, averaging ~53,798 requests/sec.
+- Data transfer rate averages 10.4 MB/sec, demonstrating efficient performance under load.
diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/gcp-shell.png b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/gcp-shell.png
new file mode 100644
index 0000000000..7e2fc3d1b5
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/gcp-shell.png differ
diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/gcp-ssh.png b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/gcp-ssh.png
new file mode 100644
index 0000000000..597ccd7fea
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/gcp-ssh.png differ
diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/gcp-vm.png b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/gcp-vm.png
new file mode 100644
index 0000000000..0d1072e20d
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/gcp-vm.png differ
diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/node-browser.png
b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/node-browser.png
new file mode 100644
index 0000000000..9435eea94b
Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/images/node-browser.png differ
diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/installation.md b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/installation.md
new file mode 100644
index 0000000000..344ae9471c
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/installation.md
@@ -0,0 +1,63 @@
+---
+title: Install Node.js Using Node Version Manager
+weight: 4
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Install Node.js with Node Version Manager (NVM)
+This guide walks you through installing Node.js with Node Version Manager (NVM). NVM lets you choose which version of Node.js you want to use, then downloads and installs that version from the official Node.js packages.
+
+### 1. Install Node Version Manager (NVM)
+First, run the following command to download and install NVM on your VM instance:
+
+```console
+curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
+```
+
+Next, activate NVM in your current shell by copying and pasting the following:
+
+```console
+export NVM_DIR="$HOME/.nvm"
+[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
+[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion
+```
+
+Confirm that NVM is available by typing:
+
+```console
+nvm --version
+```
+
+### 2. Install Node.js
+Now that NVM is installed, run the following commands to download and install Node.js:
+
+```console
+nvm install v24
+nvm use v24
+```
+
+To make this version the default in new shells, append `nvm use v24` to your `$HOME/.bashrc` file:
+
+```console
+echo 'nvm use v24' >> ~/.bashrc
+```
+
+### 3. Verify the installation
+Check that Node.js and npm (Node’s package manager) are installed correctly:
+
+```console
+node --version
+npm --version
+```
+
+You should see output similar to:
+```output
+v24.10.0
+11.6.1
+```
+
+The Node.js installation is complete. You can now proceed with the baseline testing.
diff --git a/content/learning-paths/servers-and-cloud-computing/node-js-gcp/instance.md b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/instance.md
new file mode 100644
index 0000000000..fc33e92cfe
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/node-js-gcp/instance.md
@@ -0,0 +1,39 @@
+---
+title: Create a Google Axion C4A Arm virtual machine on GCP
+weight: 3
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Overview
+
+In this section, you will learn how to provision a Google Axion C4A Arm virtual machine on Google Cloud Platform (GCP) using the `c4a-standard-4` (4 vCPUs, 16 GB memory) machine type in the Google Cloud Console.
+
+{{% notice Note %}}
+For support on GCP setup, see the Learning Path [Getting started with Google Cloud Platform](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/google/).
+{{% /notice %}}
+
+## Provision a Google Axion C4A Arm VM in Google Cloud Console
+
+To create a virtual machine based on the C4A instance type:
+- Navigate to the [Google Cloud Console](https://console.cloud.google.com/).
+- Go to **Compute Engine > VM Instances** and select **Create Instance**.
+- Under **Machine configuration**:
+  - Populate fields such as **Instance name**, **Region**, and **Zone**.
+  - Set **Series** to `C4A`.
+  - Select `c4a-standard-4` for machine type.
+
+  ![Create a Google Axion C4A Arm virtual machine in the Google Cloud Console with c4a-standard-4 selected alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console")
+
+
+- Under **OS and Storage**, select **Change**, then choose an Arm64-based OS image. For this Learning Path, use **SUSE Linux Enterprise Server**. Select **Pay As You Go** for the license type. Click **Select**.
+- Under **Networking**, enable **Allow HTTP traffic**.
+- Click **Create** to launch the instance.
+- Once created, you should see an **SSH** option to the right of your instance in the list of VM instances. Select it to launch an SSH session into your VM instance:
+
+![Invoke an SSH session via your browser alt-text#center](images/gcp-ssh.png "Invoke an SSH session into your running VM instance")
+
+- A browser window opens, and you should see a shell connected to your VM instance:
+
+![Terminal Shell in your VM instance alt-text#center](images/gcp-shell.png "Terminal shell in your VM instance")
diff --git a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/_index.md b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/_index.md
index da8b5b5949..fe5d7bd7c6 100644
--- a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/_index.md
@@ -1,23 +1,19 @@
---
title: Deploy SqueezeNet 1.0 INT8 model with ONNX Runtime on Azure Cobalt 100
-draft: true
-cascade:
-  draft: true
-
+
minutes_to_complete: 60
-who_is_this_for: This Learning Path introduces ONNX deployment on Microsoft Azure Cobalt 100 (Arm-based) virtual machines. It is designed for developers deploying ONNX-based applications on Arm-based machines.
+who_is_this_for: This Learning Path is for developers deploying ONNX-based applications on Arm-based machines. learning_objectives: - - Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image. - - Deploy ONNX on the Ubuntu Pro virtual machine. - - Perform ONNX baseline testing and benchmarking on Arm64 virtual machines. + - Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image + - Perform ONNX baseline testing and benchmarking on Arm64 virtual machines prerequisites: - - A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6). - - Basic understanding of Python and machine learning concepts. - - Familiarity with [ONNX Runtime](https://onnxruntime.ai/docs/) and Azure cloud services. + - A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6) + - Basic understanding of Python and machine learning concepts + - Familiarity with [ONNX Runtime](https://onnxruntime.ai/docs/) and Azure cloud services author: Pareena Verma diff --git a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/background.md b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/background.md index 1aef38fe2f..9f1bb80efc 100644 --- a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/background.md +++ b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/background.md @@ -8,14 +8,33 @@ layout: "learningpathall" ## Azure Cobalt 100 Arm-based processor -Azure’s Cobalt 100 is built on Microsoft's first-generation, in-house Arm-based processor: the Cobalt 100. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads. 
These include web and application servers, data analytics, open-source databases, caching systems, and more. Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each vCPU, ensuring consistent and predictable performance.
-To learn more about Cobalt 100, refer to the blog [Announcing the preview of new Azure virtual machine based on the Azure Cobalt 100 processor](https://techcommunity.microsoft.com/blog/azurecompute/announcing-the-preview-of-new-azure-vms-based-on-the-azure-cobalt-100-processor/4146353).
+Azure Cobalt 100 is Microsoft's first-generation, in-house Arm-based processor. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, it is a 64-bit CPU that delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads.
+
+You can use Cobalt 100 for:
+
+- Web and application servers
+- Data analytics
+- Open-source databases
+- Caching systems
+- Many other scale-out workloads
+
+Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each vCPU, ensuring consistent and predictable performance. You can learn more about Cobalt 100 in the Microsoft blog [Announcing the preview of new Azure virtual machine based on the Azure Cobalt 100 processor](https://techcommunity.microsoft.com/blog/azurecompute/announcing-the-preview-of-new-azure-vms-based-on-the-azure-cobalt-100-processor/4146353).

## ONNX

-ONNX (Open Neural Network Exchange) is an open-source format designed for representing machine learning models.
-It provides interoperability between different deep learning frameworks, enabling models trained in one framework (such as PyTorch or TensorFlow) to be deployed and run in another.
-ONNX models are serialized into a standardized format that can be executed by the ONNX Runtime, a high-performance inference engine optimized for CPU, GPU, and specialized hardware accelerators.
This separation of model training and inference allows developers to build flexible, portable, and production-ready AI workflows. +ONNX (Open Neural Network Exchange) is an open-source format designed for representing machine learning models. + +You can use ONNX to: + +- Move models between different deep learning frameworks, such as PyTorch and TensorFlow +- Deploy models trained in one framework to run in another +- Build flexible, portable, and production-ready AI workflows + +ONNX models are serialized into a standardized format that you can execute with ONNX Runtime - a high-performance inference engine optimized for CPU, GPU, and specialized hardware accelerators. This separation of model training and inference lets you deploy models efficiently across cloud, edge, and mobile environments. + +To learn more, see the [ONNX official website](https://onnx.ai/) and the [ONNX Runtime documentation](https://onnxruntime.ai/docs/). + +## Next steps for ONNX on Azure Cobalt 100 -ONNX is widely used in cloud, edge, and mobile environments to deliver efficient and scalable inference for deep learning models. Learn more from the [ONNX official website](https://onnx.ai/) and the [ONNX Runtime documentation](https://onnxruntime.ai/docs/). +Now that you understand the basics of Azure Cobalt 100 and ONNX Runtime, you are ready to deploy and benchmark ONNX models on Arm-based Azure virtual machines. This Learning Path will guide you step by step through setting up an Azure Cobalt 100 VM, installing ONNX Runtime, and running machine learning inference on Arm64 infrastructure. 
diff --git a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/baseline.md b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/baseline.md
index 08f727beb4..2c7a50e4da 100644
--- a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/baseline.md
+++ b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/baseline.md
@@ -36,19 +36,24 @@ python3 baseline.py
You should see output similar to:
```output
Inference time: 0.0026061534881591797
-```
+```
+
-{{% notice Note %}}Inference time is the amount of time it takes for a trained machine learning model to make a prediction (i.e., produce output) after receiving input data.
-input tensor of shape (1, 3, 224, 224):
-- 1: batch size
-- 3: color channels (RGB)
-- 224 x 224: image resolution (common for models like SqueezeNet)
+{{% notice Note %}}
+Inference time is how long it takes for a trained machine learning model to make a prediction after it receives input data.
+
+The input tensor shape `(1, 3, 224, 224)` means:
+- `1`: One image is processed at a time (batch size)
+- `3`: Three color channels (red, green, blue)
+- `224 x 224`: Each image is 224 pixels wide and 224 pixels tall (standard for SqueezeNet)
{{% /notice %}}

This indicates the model successfully executed a single forward pass through the SqueezeNet INT8 ONNX model and returned results.
-#### Output summary:
+## Output summary

Single inference latency (0.00260 sec): This is the time required for the model to process one input image and produce an output. The first run includes graph loading, memory allocation, and model initialization overhead. Subsequent inferences are usually faster due to caching and optimized execution. This demonstrates that the setup is fully working, and ONNX Runtime efficiently executes quantized models on Arm64.
+
+Great job! You've completed your first ONNX Runtime inference on Arm-based Azure infrastructure.
This baseline test confirms your environment is set up correctly and ready for more advanced benchmarking.
+
+Next, you'll use a dedicated benchmarking tool to capture more detailed performance statistics and further optimize your deployment.
diff --git a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/benchmarking.md
index d3a18d7050..5a429269e8 100644
--- a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/benchmarking.md
+++ b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/benchmarking.md
@@ -1,19 +1,25 @@
---
-title: Benchmarking via onnxruntime_perf_test
+title: Benchmark ONNX Runtime performance with onnxruntime_perf_test
weight: 6

### FIXED, DO NOT MODIFY
layout: learningpathall
---

-Now that you have validated ONNX Runtime with Python-based timing (e.g., SqueezeNet baseline test), you can move to using a dedicated benchmarking utility called `onnxruntime_perf_test`. This tool is designed for systematic performance evaluation of ONNX models, allowing you to capture more detailed statistics than simple Python timing.
-This helps evaluate the ONNX Runtime efficiency on Azure Arm64-based Cobalt 100 instances and other x86_64 instances. architectures.
+## Benchmark ONNX model inference on Azure Cobalt 100
+
+Now that you have validated ONNX Runtime with Python-based timing (for example, the SqueezeNet baseline test), you can move to using a dedicated benchmarking utility called `onnxruntime_perf_test`. This tool is designed for systematic performance evaluation of ONNX models, allowing you to capture more detailed statistics than simple Python timing.
+
+This approach helps you evaluate ONNX Runtime efficiency on Azure Arm64-based Cobalt 100 instances and compare results with other architectures if needed.
+
+You are ready to run benchmarks, which is a key skill for optimizing real-world deployments.
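Benchmarking tools summarize many per-run latencies into statistics such as average, standard deviation, and percentiles. The snippet below is an illustrative, self-contained sketch of how such summaries are computed; the sample values are made up, and the nearest-rank method shown is only one common percentile definition:

```javascript
// Hypothetical per-run latencies in milliseconds (illustrative values only).
const samples = [1.0, 1.1, 1.2, 1.3, 2.0, 1.1, 1.0, 1.2, 1.4, 9.0];

const sorted = [...samples].sort((a, b) => a - b);
const avg = samples.reduce((s, x) => s + x, 0) / samples.length;

// Standard deviation: spread of individual runs around the average.
const stdev = Math.sqrt(
  samples.reduce((s, x) => s + (x - avg) ** 2, 0) / samples.length
);

// Percentile by nearest rank: the value below which p% of runs fall.
const percentile = (p) => sorted[Math.ceil((p / 100) * sorted.length) - 1];

console.log(`avg=${avg.toFixed(2)} ms, stdev=${stdev.toFixed(2)} ms`);
console.log(`p50=${percentile(50)} ms, p99=${percentile(99)} ms, max=${sorted[sorted.length - 1]} ms`);
```

Note how the single 9 ms outlier inflates the average and standard deviation while leaving the median (p50) almost untouched; this is why benchmarking tools report percentiles rather than averages alone.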
+ ## Run the performance tests using onnxruntime_perf_test -The `onnxruntime_perf_test` is a performance benchmarking tool included in the ONNX Runtime source code. It is used to measure the inference performance of ONNX models and supports multiple execution providers (like CPU, GPU, or other execution providers). on Arm64 VMs, CPU execution is the focus. +The `onnxruntime_perf_test` tool is included in the ONNX Runtime source code. You can use it to measure the inference performance of ONNX models and compare different execution providers (such as CPU or GPU). On Arm64 VMs, CPU execution is the focus. -### Install Required Build Tools -Before building or running `onnxruntime_perf_test`, you will need to install a set of development tools and libraries. These packages are required for compiling ONNX Runtime and handling model serialization via Protocol Buffers. + +## Install required build tools +Before building or running `onnxruntime_perf_test`, you need to install a set of development tools and libraries. These packages are required for compiling ONNX Runtime and handling model serialization via Protocol Buffers. ```console sudo apt update @@ -29,35 +35,48 @@ You should see output similar to: ```output libprotoc 3.21.12 ``` -### Build ONNX Runtime from Source: +## Build ONNX Runtime from source -The benchmarking tool `onnxruntime_perf_test`, isn’t available as a pre-built binary for any platform. So, you will have to build it from the source, which is expected to take around 40 minutes. +The benchmarking tool `onnxruntime_perf_test` isn’t available as a pre-built binary for any platform, so you will need to build it from source. This process can take up to 40 minutes. 
-Clone onnxruntime repo: +Clone the ONNX Runtime repository: ```console git clone --recursive https://github.com/microsoft/onnxruntime cd onnxruntime ``` + Now, build the benchmark tool: ```console ./build.sh --config Release --build_dir build/Linux --build_shared_lib --parallel --build --update --skip_tests ``` -You should see the executable at: +If the build completes successfully, you should see the executable at: ```output ./build/Linux/Release/onnxruntime_perf_test ``` -### Run the benchmark + +## Run the benchmark Now that you have built the benchmarking tool, you can run inference benchmarks on the SqueezeNet INT8 model: ```console ./build/Linux/Release/onnxruntime_perf_test -e cpu -r 100 -m times -s -Z -I ../squeezenet-int8.onnx ``` + Breakdown of the flags: - -e cpu → Use the CPU execution provider. - -r 100 → Run 100 inference passes for statistical reliability. - -m times → Run in “repeat N times” mode. Useful for latency-focused measurement. + +- `-e cpu`: use the CPU execution provider. +- `-r 100`: run 100 inference passes for statistical reliability. +- `-m times`: run in “repeat N times” mode for latency-focused measurement. +- `-s`: print summary statistics after the run. +- `-Z`: disable memory arena for more consistent timing. +- `-I ../squeezenet-int8.onnx`: path to your ONNX model file. + +You should see output with latency and throughput statistics. If you encounter build errors, check that you have enough memory (at least 8 GB recommended) and all dependencies are installed. For missing dependencies, review the installation steps above. + +If the benchmark runs successfully, you are ready to analyze and optimize your ONNX model performance on Arm-based Azure infrastructure. + +Well done! You have completed a full benchmarking workflow. Continue to the next section to explore further optimizations or advanced deployment scenarios. -s → Show detailed per-run statistics (latency distribution). -Z → Disable intra-op thread spinning. 
Reduces CPU waste when idle between runs, especially on high-core systems like Cobalt 100. -I → Input the ONNX model path directly, skipping pre-generated test data. @@ -86,17 +105,17 @@ P95 Latency: 0.00187393 s P99 Latency: 0.00190312 s P999 Latency: 0.00190312 s ``` -### Benchmark Metrics Explained +## Benchmark Metrics Explained - * Average Inference Time: The mean time taken to process a single inference request across all runs. Lower values indicate faster model execution. - * Throughput: The number of inference requests processed per second. Higher throughput reflects the model’s ability to handle larger workloads efficiently. - * CPU Utilization: The percentage of CPU resources used during inference. A value close to 100% indicates full CPU usage, which is expected during performance benchmarking. - * Peak Memory Usage: The maximum amount of system memory (RAM) consumed during inference. Lower memory usage is beneficial for resource-constrained environments. - * P50 Latency (Median Latency): The time below which 50% of inference requests complete. Represents typical latency under normal load. - * Latency Consistency: Describes the stability of latency values across all runs. "Consistent" indicates predictable inference performance with minimal jitter. + * Average inference time: the mean time taken to process a single inference request across all runs. Lower values indicate faster model execution. + * Throughput: the number of inference requests processed per second. Higher throughput reflects the model’s ability to handle larger workloads efficiently. + * CPU utilization: the percentage of CPU resources used during inference. A value close to 100% indicates full CPU usage, which is expected during performance benchmarking. + * Peak Memory Usage: the maximum amount of system memory (RAM) consumed during inference. Lower memory usage is beneficial for resource-constrained environments. 
+ * P50 Latency (Median Latency): the time below which 50% of inference requests complete. Represents typical latency under normal load. + * Latency Consistency: describes the stability of latency values across all runs. "Consistent" indicates predictable inference performance with minimal jitter. -### Benchmark summary on Arm64: -Here is a summary of benchmark results collected on an Arm64 **D4ps_v6 Ubuntu Pro 24.04 LTS virtual machine**. +## Benchmark summary on Arm64: +Here is a summary of benchmark results collected on an Arm64 D4ps_v6 Ubuntu Pro 24.04 LTS virtual machine. | **Metric** | **Value** | |----------------------------|-------------------------------| @@ -113,12 +132,9 @@ Here is a summary of benchmark results collected on an Arm64 **D4ps_v6 Ubuntu Pr | **Latency Consistency** | Consistent | -### Highlights from Benchmarking on Azure Cobalt 100 Arm64 VMs +## Highlights from Benchmarking on Azure Cobalt 100 Arm64 VMs + -The results on Arm64 virtual machines demonstrate: -- Low-Latency Inference: Achieved consistent average inference times of ~1.86 ms on Arm64. -- Strong and Stable Throughput: Sustained throughput of over 538 inferences/sec using the `squeezenet-int8.onnx` model on D4ps_v6 instances. -- Lightweight Resource Footprint: Peak memory usage stayed below 37 MB, with CPU utilization around 96%, ideal for efficient edge or cloud inference. -- Consistent Performance: P50, P95, and Max latency remained tightly bound, showcasing reliable performance on Azure Cobalt 100 Arm-based infrastructure. +These results on Arm64 virtual machines demonstrate low-latency inference, with consistent average inference times of approximately 1.86 ms. Throughput remains strong and stable, sustaining over 538 inferences per second using the `squeezenet-int8.onnx` model on D4ps_v6 instances. The resource footprint is lightweight, as peak memory usage stays below 37 MB and CPU utilization is around 96%, making this setup ideal for efficient edge or cloud inference. 
Performance is also consistent, with P50, P95, and maximum latency values tightly grouped, showcasing reliable results on Azure Cobalt 100 Arm-based infrastructure. You have now successfully benchmarked inference time of ONNX models on an Azure Cobalt 100 Arm64 virtual machine. diff --git a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/create-instance.md b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/create-instance.md index 420b6ea4b8..31f2fb1e30 100644 --- a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/create-instance.md +++ b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/create-instance.md @@ -1,12 +1,12 @@ --- -title: Create an Arm-based Azure VM with Cobalt 100 +title: Create an Arm-based Azure Cobalt 100 virtual machine weight: 3 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Set up your development environment +## Set up your Arm-based Azure Cobalt 100 virtual machine There is more than one way to create an Arm-based Cobalt 100 virtual machine: @@ -20,37 +20,46 @@ You will focus on the general-purpose virtual machines in the D-series. For furt While the steps to create this instance are included here for convenience, for further information on setting up Cobalt on Azure, see [Deploy a Cobalt 100 virtual machine on Azure Learning Path](/learning-paths/servers-and-cloud-computing/cobalt/). -#### Create an Arm-based Azure Virtual Machine +## Create an Arm-based Azure Virtual Machine -Creating a virtual machine based on Azure Cobalt 100 is no different from creating any other virtual machine in Azure. To create an Azure virtual machine, launch the Azure portal and navigate to "Virtual Machines". -1. Select "Create", and click on "Virtual Machine" from the drop-down list. -2. Inside the "Basic" tab, fill in the Instance details such as "Virtual machine name" and "Region". -3. 
Choose the image for your virtual machine (for example, Ubuntu Pro 24.04 LTS) and select “Arm64” as the VM architecture. -4. In the “Size” field, click on “See all sizes” and select the D-Series v6 family of virtual machines. Select “D4ps_v6” from the list. -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance.png "Figure 1: Select the D-Series v6 family of virtual machines") -5. Select "SSH public key" as an Authentication type. Azure will automatically generate an SSH key pair for you and allow you to store it for future use. It is a fast, simple, and secure way to connect to your virtual machine. -6. Fill in the Administrator username for your VM. -7. Select "Generate new key pair", and select "RSA SSH Format" as the SSH Key Type. RSA could offer better security with keys longer than 3072 bits. Give a Key pair name to your SSH key. -8. In the "Inbound port rules", select HTTP (80) and SSH (22) as the inbound ports. +To launch an Arm-based virtual machine on Azure, you will use the Azure portal to create a Linux VM powered by the Cobalt 100 processor. This process is similar to creating any other Azure VM, but you will specifically select the Arm64 architecture and the D-Series v6 (D4ps_v6) size for optimal performance on Arm. -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance1.png "Figure 2: Allow inbound port rules") +Follow these steps to deploy a Linux-based Azure Cobalt 100 VM: -9. Click on the "Review + Create" tab and review the configuration for your virtual machine. It should look like the following: +- Select **Create**, and click on **Virtual Machine** from the drop-down list. +- Inside the **Basic** tab, fill in the instance details such as **Virtual machine name** and **Region**. +- Choose the image for your virtual machine (for example, Ubuntu Pro 24.04 LTS) and select **Arm64** as the VM architecture. 
+- In the **Size** field, click on **See all sizes** and select the D-Series v6 family of virtual machines. Select **D4ps_v6** from the list. -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/ubuntu-pro.png "Figure 3: Review and Create an Azure Cobalt 100 Arm64 VM") +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance.png "Select the D-Series v6 family of virtual machines") -10. Finally, when you are confident about your selection, click on the "Create" button, and click on the "Download Private key and Create Resources" button. +- Select **SSH public key** as an Authentication type. Azure will automatically generate an SSH key pair for you and allow you to store it for future use. It is a fast, simple, and secure way to connect to your virtual machine. +- Fill in the **Administrator username** for your VM. +- Select **Generate new key pair**, and select **RSA SSH Format** as the SSH Key Type. RSA could offer better security with keys longer than 3072 bits. Give a **Key pair name** to your SSH key. +- In the **Inbound port rules**, select **HTTP (80)** and **SSH (22)** as the inbound ports. -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance4.png "Figure 4: Download Private key and Create Resources") +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance1.png "Allow inbound port rules") -11. Your virtual machine should be ready and running within no time. You can SSH into the virtual machine using the private key, along with the Public IP details. +Click on the **Review + Create** tab and review the configuration for your virtual machine. 
It should look like the following: -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/final-vm.png "Figure 5: VM deployment confirmation in Azure portal") +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/ubuntu-pro.png "Review and Create an Azure Cobalt 100 Arm64 VM") + +When you are confident about your selection, click on the **Create** button, and click on the **Download Private key and Create Resources** button. + +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance4.png "Download Private key and Create Resources") + +Your virtual machine should be ready and running within a few minutes. You can SSH into the virtual machine using the private key, along with the Public IP details. -{{% notice Note %}} -To learn more about Arm-based virtual machine in Azure, refer to “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure). +You should see your Arm-based Azure Cobalt 100 VM listed as **Running** in the Azure portal. If you have trouble connecting, double-check your SSH key and ensure the correct ports are open. If the VM creation fails, check your Azure quota, region availability, or try a different VM size. For more troubleshooting tips, see the [Deploy a Cobalt 100 virtual machine on Azure Learning Path](/learning-paths/servers-and-cloud-computing/cobalt/). + +Nice work! You have successfully provisioned an Arm-based Azure Cobalt 100 virtual machine. This setup is ideal for deploying Linux workloads, running ONNX Runtime, and benchmarking machine learning models on Arm64 infrastructure. You are now ready to continue with ONNX Runtime installation and performance testing in the next steps. 
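The SSH connection mentioned above typically looks like the following; the key file name, username, and public IP address are placeholders for the values from your own deployment:

```console
ssh -i <your-key-pair>.pem <admin-username>@<vm-public-ip>
```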
+ +![Azure portal VM creation - Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/final-vm.png "VM deployment confirmation in Azure portal") + +{{% notice Note %}} +For further information or alternative setup options, see “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure). {{% /notice %}} diff --git a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/deploy.md b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/deploy.md index ed9ff8e35e..a9550eefd7 100644 --- a/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/deploy.md +++ b/content/learning-paths/servers-and-cloud-computing/onnx-on-azure/deploy.md @@ -8,14 +8,18 @@ layout: learningpathall ## ONNX Installation on Azure Ubuntu Pro 24.04 LTS -To work with ONNX models on Azure, you will need a clean Python environment with the required packages. The following steps install Python, set up a virtual environment, and prepare for ONNX model execution using ONNX Runtime. +To work with ONNX models on Azure, you will need a clean Python environment with the required packages. The following steps show you how to install Python, set up a virtual environment, and prepare for ONNX model execution using ONNX Runtime. 
-### Install Python and Virtual Environment: + +## Install Python and virtual environment + +To get started, update your package list and install Python 3 along with the tools needed to create a virtual environment: ```console sudo apt update -sudo apt install -y python3 python3-pip python3-virtualenv python3-venv +sudo apt install -y python3 python3-pip python3-venv ``` + Create and activate a virtual environment: ```console @@ -24,28 +28,35 @@ source onnx-env/bin/activate ``` {{% notice Note %}}Using a virtual environment isolates ONNX and its dependencies to avoid system conflicts.{{% /notice %}} -### Install ONNX and Required Libraries: +Once your environment is active, you're ready to install the required libraries. + + +## Install ONNX and required libraries Upgrade pip and install ONNX with its runtime and supporting libraries: ```console pip install --upgrade pip pip install onnx onnxruntime fastapi uvicorn numpy ``` -This installs ONNX libraries along with FastAPI (web serving) and NumPy (for input tensor generation). +This installs ONNX libraries, FastAPI (for web serving, if you want to deploy models as an API), Uvicorn (ASGI server for FastAPI), and NumPy (for input tensor generation). + +If you encounter errors during installation, check your internet connection and ensure you are using the activated virtual environment. For missing dependencies, try updating pip or installing system packages as needed. + +After installation, you're ready to validate your setup. -### Validate ONNX and ONNX Runtime: -Once the libraries are installed, you should verify that both ONNX and ONNX Runtime are correctly set up on your VM. + +## Validate ONNX and ONNX Runtime +Once the libraries are installed, verify that both ONNX and ONNX Runtime are correctly set up on your VM. 
Create a file named `version.py` with the following code: ```python import onnx import onnxruntime -print("ONNX version:", onnx.__version__) -print("ONNX Runtime version:", onnxruntime.__version__) +print("ONNX version:", onnx.__version__) +print("ONNX Runtime version:", onnxruntime.__version__) ``` -Run the script: - +Run the script: ```console python3 version.py ``` @@ -54,15 +65,20 @@ You should see output similar to: ONNX version: 1.19.0 ONNX Runtime version: 1.23.0 ``` -With this validation, you have confirmed that ONNX and ONNX Runtime are installed and ready on your Azure Cobalt 100 VM. This is the foundation for running inference workloads and serving ONNX models. +If you see version numbers for both ONNX and ONNX Runtime, your environment is ready. If you get an ImportError, double-check that your virtual environment is activated and the libraries are installed. + +Great job! You have confirmed that ONNX and ONNX Runtime are installed and ready on your Azure Cobalt 100 VM. This is the foundation for running inference workloads and serving ONNX models. + + +## Download and validate ONNX model: SqueezeNet +SqueezeNet is a lightweight convolutional neural network (CNN) architecture designed to provide accuracy close to AlexNet while using 50x fewer parameters and a much smaller model size. This makes it well-suited for benchmarking ONNX Runtime. -### Download and Validate ONNX Model - SqueezeNet: -SqueezeNet is a lightweight convolutional neural network (CNN) architecture designed to provide accuracy close to AlexNet while using 50x fewer parameters and a much smaller model size. This makes it well-suited for benchmarking ONNX Runtime. +Now that your environment is set up and validated, you're ready to download and test the SqueezeNet model in the next step. 
Download the quantized model: ```console wget https://github.com/onnx/models/raw/main/validated/vision/classification/squeezenet/model/squeezenet1.0-12-int8.onnx -O squeezenet-int8.onnx ``` -#### Validate the model: +## Validate the model: After downloading the SqueezeNet ONNX model, the next step is to confirm that it is structurally valid and compliant with the ONNX specification. ONNX provides a built-in checker utility that verifies the graph, operators, and metadata. Create a file named `validation.py` with the following code: diff --git a/content/learning-paths/servers-and-cloud-computing/php-on-gcp/_index.md b/content/learning-paths/servers-and-cloud-computing/php-on-gcp/_index.md new file mode 100644 index 0000000000..f421de80f2 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/php-on-gcp/_index.md @@ -0,0 +1,63 @@ +--- +title: Deploy PHP on Google Cloud C4A (Arm-based Axion VMs) + +draft: true +cascade: + draft: true + +minutes_to_complete: 30 + +who_is_this_for: This is an introductory topic for software developers migrating PHP workloads from x86_64 to Arm-based servers, specifically on Google Cloud C4A virtual machines built on Axion processors. 
+ + +learning_objectives: + - Provision a SUSE SLES virtual machine on Google Cloud C4A (Arm-based Axion VM) + - Install PHP on a SUSE Arm64 (C4A) instance + - Validate PHP functionality with baseline HTTP server tests + - Benchmark PHP performance using PHPBench on Arm64 architecture + + +prerequisites: + - A [Google Cloud Platform (GCP)](https://cloud.google.com/free) account with billing enabled + - Basic familiarity with web servers and PHP scripting +author: Pareena Verma + +##### Tags +skilllevels: Introductory +subjects: Web +cloud_service_providers: Google Cloud + +armips: + - Neoverse + +tools_software_languages: + - PHP + - apache + - PHPBench + +operatingsystems: + - Linux + +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ +further_reading: + - resource: + title: Google Cloud documentation + link: https://cloud.google.com/docs + type: documentation + + - resource: + title: PHP documentation + link: https://www.php.net/ + type: documentation + + - resource: + title: PHPBench documentation + link: https://github.com/phpbench/phpbench + type: documentation + +weight: 1 +layout: "learningpathall" +learning_path_main_page: "yes" +--- diff --git a/content/learning-paths/servers-and-cloud-computing/php-on-gcp/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/php-on-gcp/_next-steps.md new file mode 100644 index 0000000000..c3db0de5a2 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/php-on-gcp/_next-steps.md @@ -0,0 +1,8 @@ +--- +# ================================================================================ +# FIXED, DO NOT MODIFY THIS FILE +# ================================================================================ +weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation. 
+title: "Next Steps" # Always the same, html page title. +layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing. +--- diff --git a/content/learning-paths/servers-and-cloud-computing/php-on-gcp/background.md b/content/learning-paths/servers-and-cloud-computing/php-on-gcp/background.md new file mode 100644 index 0000000000..8bbb374d12 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/php-on-gcp/background.md @@ -0,0 +1,23 @@ +--- +title: Get started with PHP on Google Axion C4A (Arm Neoverse V2) + +weight: 2 + +layout: "learningpathall" +--- + +## Google Axion C4A Arm instances in Google Cloud + +Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse V2 cores. Designed for high-performance and energy-efficient computing, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications. + +The C4A series provides a cost-effective alternative to x86 virtual machines while leveraging the scalability and performance benefits of the Arm architecture in Google Cloud. + +To learn more about Google Axion, refer to the [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu) blog. + +## PHP + +PHP (Hypertext Preprocessor) is an open-source, server-side scripting language designed for web development. + +It allows developers to create dynamic web pages, interact with databases, handle forms, and build web applications. PHP can be embedded directly into HTML, making it easy to generate content dynamically on the server before sending it to the browser. + +PHP is widely used for websites, web applications, content management systems (CMS), and APIs. 
Learn more from the [PHP official website](https://www.php.net/) and its [official documentation](https://www.php.net/docs.php). diff --git a/content/learning-paths/servers-and-cloud-computing/php-on-gcp/baseline.md b/content/learning-paths/servers-and-cloud-computing/php-on-gcp/baseline.md new file mode 100644 index 0000000000..e72b143540 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/php-on-gcp/baseline.md @@ -0,0 +1,167 @@ +--- +title: PHP baseline testing on Google Axion C4A Arm Virtual Machine +weight: 5 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + + +## Baseline Setup for PHP-FPM +This section guides you through configuring PHP-FPM (FastCGI Process Manager) on a SUSE Arm-based Google Cloud C4A virtual machine. You will prepare the PHP-FPM pool configuration, verify PHP's FastCGI setup, and later connect it to Apache to confirm end-to-end functionality. + +### Configure the PHP-FPM Pool + +PHP-FPM (FastCGI Process Manager) runs PHP scripts in dedicated worker processes that are independent of the web server. +This design improves performance, security, and fault isolation — especially useful on multi-core Arm-based processors like Google Cloud’s Axion C4A VMs. + +A pool defines a group of PHP worker processes, each serving incoming FastCGI requests. Different applications or virtual hosts can use separate pools for better resource control. + +### Copy the Default Configuration (if missing) + +If your PHP-FPM configuration files don't exist yet (for example, after a minimal installation in this Learning Path), copy the defaults into place using the commands below: + +```console +sudo cp /etc/php8/fpm/php-fpm.d/www.conf.default /etc/php8/fpm/php-fpm.d/www.conf +sudo cp /etc/php8/fpm/php-fpm.conf.default /etc/php8/fpm/php-fpm.conf +``` +These commands: +Create a default pool configuration (www.conf) that controls how PHP-FPM spawns and manages worker processes. 
+Restore the main FPM service configuration (php-fpm.conf) if it's missing. + +### Edit the Configuration + +Open the PHP-FPM pool configuration file in a text editor: + +```console +sudo vi /etc/php8/fpm/php-fpm.d/www.conf +``` + +Locate the following line: + +```output +listen = 127.0.0.1:9000 +``` +Replace it with the following configuration to use a Unix socket instead of a TCP port: + +```console +listen = /run/php-fpm/www.sock +listen.owner = wwwrun +listen.group = www +listen.mode = 0660 +``` + +Explanation of each directive: +| Directive | Description | +| ----------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| listen = /run/php-fpm/www.sock | Configures PHP-FPM to communicate with Apache using a local Unix socket instead of a TCP port (`127.0.0.1:9000`). This reduces network overhead and improves performance. | +| listen.owner = wwwrun | Sets the owner of the socket file to `wwwrun`, which is the default user that Apache runs as on SUSE systems. This ensures Apache has access to the socket. | +| listen.group = www | Assigns the group ownership of the socket to `www`, aligning with Apache’s default process group for proper access control. | +| listen.mode = 0660 | Defines file permissions so that both the owner (`wwwrun`) and group (`www`) can read and write to the socket. This enables smooth communication between Apache and PHP-FPM. 
| + + +### Start and Enable PHP-FPM + +After updating the configuration, restart the PHP-FPM service so it picks up the new settings: + +```console +sudo systemctl restart php-fpm +``` +Then, verify that PHP-FPM is running: + +```console +sudo systemctl status php-fpm +``` + +You should see output similar to: + +```output +● php-fpm.service - The PHP FastCGI Process Manager + Loaded: loaded (/usr/lib/systemd/system/php-fpm.service; disabled; vendor preset: disabled) + Active: active (running) since Thu 2025-10-16 13:56:44 UTC; 7s ago + Main PID: 19606 (php-fpm) + Status: "Ready to handle connections" + Tasks: 3 + CPU: 29ms + CGroup: /system.slice/php-fpm.service + ├─ 19606 "php-fpm: master process (/etc/php8/fpm/php-fpm.conf)" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" + ├─ 19607 "php-fpm: pool www" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ""> + └─ 19608 "php-fpm: pool www" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ""> + +Oct 16 13:56:44 pareena-php-test systemd[1]: Starting The PHP FastCGI Process Manager... +Oct 16 13:56:44 pareena-php-test systemd[1]: Started The PHP FastCGI Process Manager. +``` +PHP-FPM is now active and ready to process requests via its Unix socket (/run/php-fpm/www.sock). +Next, you will configure Apache to communicate with PHP-FPM, allowing your server to process and serve dynamic PHP pages. 
+ +## Install the Apache PHP8 module +If you prefer to have Apache handle PHP execution directly (instead of using PHP-FPM), you can install the Apache PHP 8 module, which integrates PHP into Apache using the `mod_php` interface: + +```console +sudo zypper install apache2-mod_php8 +``` +Once the module is installed, restart Apache to load the new configuration: + +```console +sudo systemctl restart apache2 +``` +Next, you will test PHP execution by creating a simple PHP page and verifying that Apache can correctly render dynamic content. + +## Test PHP +Now that PHP and Apache are installed, you can verify that everything is working correctly. + +### Create a Test Page +Create a simple PHP file that displays detailed information about your PHP installation: + +```console +echo "<?php phpinfo(); ?>" | sudo tee /srv/www/htdocs/info.php +``` +This creates a file named `info.php` inside Apache's web root directory (`/srv/www/htdocs/`). When you open this file in a browser, it will display the PHP configuration page. + +### Test from Inside the VM +You can verify that PHP and Apache are communicating correctly by testing the web server locally using curl: + +```console +curl http://localhost/info.php +``` +- `curl` fetches the page from the local Apache server. +- If PHP is working, you will see a large block of HTML code as output. This is the rendered output of the phpinfo() function. +- This confirms that Apache successfully passed the request to the PHP interpreter and returned the generated HTML response. + +You should see output similar to: + +```output + + 