diff --git a/sources/academy/webscraping/scraping_basics_javascript2/05_parsing_html.md b/sources/academy/webscraping/scraping_basics_javascript2/05_parsing_html.md
index 6f96ed2c7..f2c22173d 100644
--- a/sources/academy/webscraping/scraping_basics_javascript2/05_parsing_html.md
+++ b/sources/academy/webscraping/scraping_basics_javascript2/05_parsing_html.md
@@ -20,9 +20,9 @@ As a first step, let's try counting how many products are on the listing page.
## Processing HTML
-After downloading, the entire HTML is available in our program as a string. We can print it to the screen or save it to a file, but not much more. However, since it's a string, could we use [string operations](https://docs.python.org/3/library/stdtypes.html#string-methods) or [regular expressions](https://docs.python.org/3/library/re.html) to count the products?
+After downloading, the entire HTML is available in our program as a string. We can print it to the screen or save it to a file, but not much more. However, since it's a string, could we use [string operations](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String#instance_methods) or [regular expressions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions) to count the products?
-While somewhat possible, such an approach is tedious, fragile, and unreliable. To work with HTML, we need a robust tool dedicated to the task: an _HTML parser_. It takes a text with HTML markup and turns it into a tree of Python objects.
+While somewhat possible, such an approach is tedious, fragile, and unreliable. To work with HTML, we need a robust tool dedicated to the task: an _HTML parser_. It takes text with HTML markup and turns it into a tree of JavaScript objects.
:::info Why regex can't parse HTML
@@ -30,138 +30,192 @@ While [Bobince's infamous StackOverflow answer](https://stackoverflow.com/a/1732
:::
-We'll choose [Beautiful Soup](https://beautiful-soup-4.readthedocs.io/) as our parser, as it's a popular library renowned for its ability to process even non-standard, broken markup. This is useful for scraping, because real-world websites often contain all sorts of errors and discrepancies.
+We'll choose [Cheerio](https://cheerio.js.org/) as our parser, as it's a popular library which can process even non-standard, broken markup. This is useful for scraping, because real-world websites often contain all sorts of errors and discrepancies. In the project directory, we'll run the following to install the Cheerio package:
```text
-$ pip install beautifulsoup4
+$ npm install cheerio --save
+
+added 23 packages, and audited 24 packages in 1s
...
-Successfully installed beautifulsoup4-4.0.0 soupsieve-0.0
```
-Now let's use it for parsing the HTML. The `BeautifulSoup` object allows us to work with the HTML elements in a structured way. As a demonstration, we'll first get the `<h1>` element, which represents the main heading of the page.
+:::tip Installing packages
+
+Being comfortable with installing Node.js packages is a prerequisite of this course, but if you wouldn't say no to a recap, we recommend the [An introduction to the npm package manager](https://nodejs.org/en/learn/getting-started/an-introduction-to-the-npm-package-manager) tutorial from the official Node.js documentation.
+
+:::
+
+Now let's import the package and use it for parsing the HTML. The `cheerio` module allows us to work with the HTML elements in a structured way. As a demonstration, we'll first get the `<h1>` element, which represents the main heading of the page.

We'll update our code to the following:
-```py
-import httpx
-from bs4 import BeautifulSoup
+```js
+import * as cheerio from 'cheerio';
-url = "https://warehouse-theme-metal.myshopify.com/collections/sales"
-response = httpx.get(url)
-response.raise_for_status()
+const url = "https://warehouse-theme-metal.myshopify.com/collections/sales";
+const response = await fetch(url);
-html_code = response.text
-soup = BeautifulSoup(html_code, "html.parser")
-print(soup.select("h1"))
+if (response.ok) {
+ const html = await response.text();
+ const $ = cheerio.load(html);
+ console.log($("h1"));
+} else {
+ throw new Error(`HTTP ${response.status}`);
+}
```
Then let's run the program:
```text
-$ python main.py
-[<h1 class="collection__title heading h1">Sales</h1>]
+$ node index.js
+LoadedCheerio {
+ '0': [ Element {
+ parent: Element { ... },
+ prev: Text { ... },
+ next: Element { ... },
+ startIndex: null,
+ endIndex: null,
+# highlight-next-line
+ children: [ [Text] ],
+# highlight-next-line
+ name: 'h1',
+ attribs: [Object: null prototype] { class: 'collection__title heading h1' },
+ type: 'tag',
+ namespace: 'http://www.w3.org/1999/xhtml',
+ 'x-attribsNamespace': [Object: null prototype] { class: undefined },
+ 'x-attribsPrefix': [Object: null prototype] { class: undefined }
+ },
+ length: 1,
+ ...
+}
```
-Our code lists all `h1` elements it can find on the page. It's the case that there's just one, so in the result we can see a list with a single item. What if we want to print just the text? Let's change the end of the program to the following:
+Our code prints a Cheerio object. It's something like an array of all `h1` elements Cheerio can find in the HTML we gave it. There happens to be just one, so we can see only a single item in the selection.
+
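+Because the selection is array-like, we could inspect it by index. Here's a minimal sketch that builds on the program above; the values match the output we've just seen:
+
+```js
+const headings = $("h1");
+
+// how many elements matched the selector
+console.log(headings.length); // 1
+
+// indexing gives us the underlying element object
+console.log(headings[0].attribs.class); // collection__title heading h1
+```
+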
+The item has many properties, such as references to its parent or sibling elements, but most importantly, its name is `h1`, and its `children` property contains a single text element. Now let's print just the text by changing our program to the following:
+
+```js
+import * as cheerio from 'cheerio';
-```py
-headings = soup.select("h1")
-first_heading = headings[0]
-print(first_heading.text)
+const url = "https://warehouse-theme-metal.myshopify.com/collections/sales";
+const response = await fetch(url);
+
+if (response.ok) {
+ const html = await response.text();
+ const $ = cheerio.load(html);
+ // highlight-next-line
+ console.log($("h1").text());
+} else {
+ throw new Error(`HTTP ${response.status}`);
+}
```
-If we run our scraper again, it prints the text of the first `h1` element:
+Thanks to the nature of the Cheerio object, we don't have to explicitly find the first element. Calling `.text()` combines the text of all elements in the selection. If we run our scraper again, it prints the text of the `h1` element:
```text
-$ python main.py
+$ node index.js
Sales
```
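+If the selector matched several elements, calling `.text()` would join their texts together. Here's a minimal sketch with made-up markup, just to illustrate the behavior:
+
+```js
+import * as cheerio from 'cheerio';
+
+// hypothetical markup, not from the Warehouse page
+const $ = cheerio.load("<h2>Boots</h2><h2>Jackets</h2>");
+
+// .text() combines the text of both matching elements
+console.log($("h2").text()); // BootsJackets
+```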
:::note Dynamic websites
-The Warehouse returns full HTML in its initial response, but many other sites add content via JavaScript after the page loads or after user interaction. In such cases, what we see in DevTools may differ from `response.text` in Python. Learn how to handle these scenarios in our [API Scraping](../api_scraping/index.md) and [Puppeteer & Playwright](../puppeteer_playwright/index.md) courses.
+The Warehouse returns full HTML in its initial response, but many other sites add some content after the page loads or after user interaction. In such cases, what we'd see in DevTools could differ from `await response.text()` in Node.js. Learn how to handle these scenarios in our [API Scraping](../api_scraping/index.md) and [Puppeteer & Playwright](../puppeteer_playwright/index.md) courses.
:::
## Using CSS selectors
-Beautiful Soup's `.select()` method runs a _CSS selector_ against a parsed HTML document and returns all the matching elements. It's like calling `document.querySelectorAll()` in browser DevTools.
+Cheerio's `$()` function runs a _CSS selector_ against a parsed HTML document and returns all the matching elements. It's like calling `document.querySelectorAll()` in browser DevTools.
-Scanning through [usage examples](https://beautiful-soup-4.readthedocs.io/en/latest/#css-selectors) will help us to figure out code for counting the product cards:
+Scanning through [usage examples](https://cheerio.js.org/docs/basics/selecting) will help us to figure out code for counting the product cards:
-```py
-import httpx
-from bs4 import BeautifulSoup
+```js
+import * as cheerio from 'cheerio';
-url = "https://warehouse-theme-metal.myshopify.com/collections/sales"
-response = httpx.get(url)
-response.raise_for_status()
+const url = "https://warehouse-theme-metal.myshopify.com/collections/sales";
+const response = await fetch(url);
-html_code = response.text
-soup = BeautifulSoup(html_code, "html.parser")
-products = soup.select(".product-item")
-print(len(products))
+if (response.ok) {
+ const html = await response.text();
+ const $ = cheerio.load(html);
+ // highlight-next-line
+ console.log($(".product-item").length);
+} else {
+ throw new Error(`HTTP ${response.status}`);
+}
```
-In CSS, `.product-item` selects all elements whose `class` attribute contains value `product-item`. We call `soup.select()` with the selector and get back a list of matching elements. Beautiful Soup handles all the complexity of understanding the HTML markup for us. On the last line, we use `len()` to count how many items there is in the list.
+In CSS, `.product-item` selects all elements whose `class` attribute contains the value `product-item`. We call `$()` with the selector and get back the matching elements. Cheerio handles all the complexity of understanding the HTML markup for us. Then we use `.length` to count how many items there are in the selection.
```text
-$ python main.py
+$ node index.js
24
```
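+By the way, "contains" means the element can have other classes too and still match. A quick sketch with made-up markup:
+
+```js
+import * as cheerio from 'cheerio';
+
+// hypothetical markup: the element has several classes, one of them being product-item
+const $ = cheerio.load('<div class="product-item product-item--vertical">Tablet</div>');
+console.log($(".product-item").length); // 1
+```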
That's it! We've managed to download a product listing, parse its HTML, and count how many products it contains. In the next lesson, we'll be looking for a way to extract detailed information about individual products.
+:::info Cheerio and jQuery
+
+The Cheerio documentation frequently mentions something called jQuery. In the medieval days of the internet, when so-called Internet Explorers roamed the untamed plains of simple websites, developers created the first JavaScript frameworks to improve their crude tools and overcome the wild inconsistencies between browsers. Imagine a time when things like `document.querySelectorAll()` didn't even exist. jQuery was the most popular of these frameworks, granting great power to those who knew how to wield it.
+
+Cheerio was deliberately designed to mimic jQuery's interface. At the time, nearly everyone was familiar with it, and it felt like the most natural way to walk through HTML elements. jQuery was used in the browser, Cheerio in Node.js. But as time passed, jQuery gradually faded from relevance. In a twist of history, we now learn its syntax only to use Cheerio.
+
+:::
+
---
-### Scrape F1 teams
+### Scrape F1 Academy teams
-Print a total count of F1 teams listed on this page:
+Print a total count of F1 Academy teams listed on this page:
```text
-https://www.formula1.com/en/teams
+https://www.f1academy.com/Racing-Series/Teams
```
Solution
- ```py
- import httpx
- from bs4 import BeautifulSoup
+ ```js
+ import * as cheerio from 'cheerio';
- url = "https://www.formula1.com/en/teams"
- response = httpx.get(url)
- response.raise_for_status()
+ const url = "https://www.f1academy.com/Racing-Series/Teams";
+ const response = await fetch(url);
- html_code = response.text
- soup = BeautifulSoup(html_code, "html.parser")
- print(len(soup.select(".group")))
+ if (response.ok) {
+ const html = await response.text();
+ const $ = cheerio.load(html);
+ console.log($(".teams-driver-item").length);
+ } else {
+ throw new Error(`HTTP ${response.status}`);
+ }
```
-### Scrape F1 drivers
+### Scrape F1 Academy drivers
-Use the same URL as in the previous exercise, but this time print a total count of F1 drivers.
+Use the same URL as in the previous exercise, but this time print a total count of F1 Academy drivers.
Solution
- ```py
- import httpx
- from bs4 import BeautifulSoup
+ ```js
+ import * as cheerio from 'cheerio';
- url = "https://www.formula1.com/en/teams"
- response = httpx.get(url)
- response.raise_for_status()
+ const url = "https://www.f1academy.com/Racing-Series/Teams";
+ const response = await fetch(url);
- html_code = response.text
- soup = BeautifulSoup(html_code, "html.parser")
- print(len(soup.select(".f1-team-driver-name")))
+ if (response.ok) {
+ const html = await response.text();
+ const $ = cheerio.load(html);
+ console.log($(".driver").length);
+ } else {
+ throw new Error(`HTTP ${response.status}`);
+ }
```
diff --git a/sources/academy/webscraping/scraping_basics_javascript2/index.md b/sources/academy/webscraping/scraping_basics_javascript2/index.md
index c7dcb96b5..3751f05ef 100644
--- a/sources/academy/webscraping/scraping_basics_javascript2/index.md
+++ b/sources/academy/webscraping/scraping_basics_javascript2/index.md
@@ -33,7 +33,7 @@ Anyone with basic knowledge of developing programs in JavaScript who wants to st
## Requirements
- A macOS, Linux, or Windows machine with a web browser and Node.js installed.
-- Familiarity with JavaScript basics: variables, conditions, loops, functions, strings, lists, dictionaries, files, classes, and exceptions.
+- Familiarity with JavaScript basics: variables, conditions, loops, functions, strings, arrays, objects, files, classes, promises, imports, and exceptions.
- Comfort with building a Node.js package and installing dependencies with `npm`.
- Familiarity with running commands in Terminal (macOS/Linux) or Command Prompt (Windows).
diff --git a/sources/academy/webscraping/scraping_basics_python/05_parsing_html.md b/sources/academy/webscraping/scraping_basics_python/05_parsing_html.md
index 8b90a5cf1..74c399b69 100644
--- a/sources/academy/webscraping/scraping_basics_python/05_parsing_html.md
+++ b/sources/academy/webscraping/scraping_basics_python/05_parsing_html.md
@@ -63,7 +63,7 @@ $ python main.py
[<h1 class="collection__title heading h1">Sales</h1>]
```
-Our code lists all `h1` elements it can find on the page. It's the case that there's just one, so in the result we can see a list with a single item. What if we want to print just the text? Let's change the end of the program to the following:
+Our code lists all `h1` elements it can find in the HTML we gave it. There happens to be just one, so in the result we can see a list with a single item. What if we want to print just the text? Let's change the end of the program to the following:
```py
headings = soup.select("h1")
@@ -80,7 +80,7 @@ Sales
:::note Dynamic websites
-The Warehouse returns full HTML in its initial response, but many other sites add content via JavaScript after the page loads or after user interaction. In such cases, what we see in DevTools may differ from `response.text` in Python. Learn how to handle these scenarios in our [API Scraping](../api_scraping/index.md) and [Puppeteer & Playwright](../puppeteer_playwright/index.md) courses.
+The Warehouse returns full HTML in its initial response, but many other sites add some content after the page loads or after user interaction. In such cases, what we'd see in DevTools could differ from `response.text` in Python. Learn how to handle these scenarios in our [API Scraping](../api_scraping/index.md) and [Puppeteer & Playwright](../puppeteer_playwright/index.md) courses.
:::
@@ -117,12 +117,12 @@ That's it! We've managed to download a product listing, parse its HTML, and coun
-### Scrape F1 teams
+### Scrape F1 Academy teams
-Print a total count of F1 teams listed on this page:
+Print a total count of F1 Academy teams listed on this page:
```text
-https://www.formula1.com/en/teams
+https://www.f1academy.com/Racing-Series/Teams
```
@@ -132,20 +132,20 @@ https://www.formula1.com/en/teams
import httpx
from bs4 import BeautifulSoup
- url = "https://www.formula1.com/en/teams"
+ url = "https://www.f1academy.com/Racing-Series/Teams"
response = httpx.get(url)
response.raise_for_status()
html_code = response.text
soup = BeautifulSoup(html_code, "html.parser")
- print(len(soup.select(".group")))
+ print(len(soup.select(".teams-driver-item")))
```
-### Scrape F1 drivers
+### Scrape F1 Academy drivers
-Use the same URL as in the previous exercise, but this time print a total count of F1 drivers.
+Use the same URL as in the previous exercise, but this time print a total count of F1 Academy drivers.
Solution
@@ -154,13 +154,13 @@ Use the same URL as in the previous exercise, but this time print a total count
import httpx
from bs4 import BeautifulSoup
- url = "https://www.formula1.com/en/teams"
+ url = "https://www.f1academy.com/Racing-Series/Teams"
response = httpx.get(url)
response.raise_for_status()
html_code = response.text
soup = BeautifulSoup(html_code, "html.parser")
- print(len(soup.select(".f1-team-driver-name")))
+ print(len(soup.select(".driver")))
```