Commit 4494042

feat: decide about the technologies
1 parent 5ddc392 commit 4494042

7 files changed: +15 -15 lines changed


sources/academy/webscraping/scraping_basics_javascript2/04_downloading_html.md

Lines changed: 2 additions & 2 deletions

@@ -1,13 +1,13 @@
 ---
 title: Downloading HTML with Node.js
 sidebar_label: Downloading HTML
-description: Lesson about building a Node.js application for watching prices. Using the /TBD/ library to download HTML code of a product listing page.
+description: Lesson about building a Node.js application for watching prices. Using the Fetch API to download HTML code of a product listing page.
 slug: /scraping-basics-javascript2/downloading-html
 ---

 import Exercises from './_exercises.mdx';

-**In this lesson we'll start building a Node.js application for watching prices. As a first step, we'll use the /TBD/ library to download HTML code of a product listing page.**
+**In this lesson we'll start building a Node.js application for watching prices. As a first step, we'll use the Fetch API to download HTML code of a product listing page.**

 ---
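
For context, a minimal sketch of the step this lesson now describes: downloading a page with the built-in Fetch API in Node.js 18+. The URL is a placeholder, not the lesson's actual target.

```js
// Download the HTML of a product listing page with the built-in Fetch API
// (Node.js 18+). Run as an ES module so top-level await works.
const url = "https://example.com/products"; // placeholder URL

const response = await fetch(url);
if (!response.ok) {
  throw new Error(`HTTP ${response.status} for ${url}`);
}
const html = await response.text();
console.log(html);
```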

sources/academy/webscraping/scraping_basics_javascript2/05_parsing_html.md

Lines changed: 2 additions & 2 deletions

@@ -1,13 +1,13 @@
 ---
 title: Parsing HTML with Node.js
 sidebar_label: Parsing HTML
-description: Lesson about building a Node.js application for watching prices. Using the /TBD/ library to parse HTML code of a product listing page.
+description: Lesson about building a Node.js application for watching prices. Using the Cheerio library to parse HTML code of a product listing page.
 slug: /scraping-basics-javascript2/parsing-html
 ---

 import Exercises from './_exercises.mdx';

-**In this lesson we'll look for products in the downloaded HTML. We'll use /TBD/ to turn the HTML into objects which we can work with in our Node.js program.**
+**In this lesson we'll look for products in the downloaded HTML. We'll use Cheerio to turn the HTML into objects which we can work with in our Node.js program.**

 ---
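
For reference, a minimal sketch of the parsing step the updated description names: loading an HTML string into Cheerio so it can be queried with CSS selectors. The sample markup is made up.

```js
import * as cheerio from "cheerio";

// In the lesson, `html` would be the string downloaded with fetch; here a tiny
// made-up document stands in for it.
const html = "<html><head><title>Sales</title></head><body></body></html>";

// Cheerio parses the markup into a jQuery-like object we can query.
const $ = cheerio.load(html);
console.log($("title").text()); // Sales
```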

sources/academy/webscraping/scraping_basics_javascript2/06_locating_elements.md

Lines changed: 2 additions & 2 deletions

@@ -1,13 +1,13 @@
 ---
 title: Locating HTML elements with Node.js
 sidebar_label: Locating HTML elements
-description: Lesson about building a Node.js application for watching prices. Using the /TBD/ library to locate products on the product listing page.
+description: Lesson about building a Node.js application for watching prices. Using the Cheerio library to locate products on the product listing page.
 slug: /scraping-basics-javascript2/locating-elements
 ---

 import Exercises from './_exercises.mdx';

-**In this lesson we'll locate product data in the downloaded HTML. We'll use /TBD/ to find those HTML elements which contain details about each product, such as title or price.**
+**In this lesson we'll locate product data in the downloaded HTML. We'll use Cheerio to find those HTML elements which contain details about each product, such as title or price.**

 ---
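
A rough sketch of what locating product elements with Cheerio can look like; the class names and markup below are hypothetical placeholders, not necessarily what the lesson's target page uses.

```js
import * as cheerio from "cheerio";

// Hypothetical markup standing in for the real product listing page.
const html = `
  <div class="product-item">
    <a class="product-item__title" href="/products/socks">Socks</a>
    <span class="price">$4.99</span>
  </div>`;

const $ = cheerio.load(html);

// Loop over all elements matching the selector and read their text content.
$(".product-item").each((index, element) => {
  const title = $(element).find(".product-item__title").text().trim();
  const price = $(element).find(".price").text().trim();
  console.log({ title, price });
});
```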

sources/academy/webscraping/scraping_basics_javascript2/08_saving_data.md

Lines changed: 2 additions & 2 deletions

@@ -1,11 +1,11 @@
 ---
 title: Saving data with Node.js
 sidebar_label: Saving data
-description: Lesson about building a Node.js application for watching prices. Using /TBD/ to save data scraped from product listing pages in popular formats such as CSV or JSON.
+description: Lesson about building a Node.js application for watching prices. Using the json2csv library to save data scraped from product listing pages in both JSON and CSV.
 slug: /scraping-basics-javascript2/saving-data
 ---

-**In this lesson, we'll save the data we scraped in the popular formats, such as CSV or JSON. We'll use /TBD/ to export the files.**
+**In this lesson, we'll save the data we scraped in the popular formats, such as CSV or JSON. We'll use the json2csv library to export the files.**

 ---
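
A minimal sketch of exporting records to JSON and CSV, assuming the json2csv package's documented Parser class; the sample data is made up.

```js
import { writeFile } from "node:fs/promises";
import { Parser } from "json2csv";

// Made-up records standing in for the data scraped in earlier lessons.
const products = [
  { title: "Socks", price: "$4.99" },
  { title: "T-shirt", price: "$19.99" },
];

// JSON needs nothing beyond the standard library.
await writeFile("products.json", JSON.stringify(products, null, 2));

// json2csv turns the same array of objects into CSV rows.
const parser = new Parser();
await writeFile("products.csv", parser.parse(products));
```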

sources/academy/webscraping/scraping_basics_javascript2/09_getting_links.md

Lines changed: 2 additions & 2 deletions

@@ -1,13 +1,13 @@
 ---
 title: Getting links from HTML with Node.js
 sidebar_label: Getting links from HTML
-description: Lesson about building a Node.js application for watching prices. Using the /TBD/ library to locate links to individual product pages.
+description: Lesson about building a Node.js application for watching prices. Using the Cheerio library to locate links to individual product pages.
 slug: /scraping-basics-javascript2/getting-links
 ---

 import Exercises from './_exercises.mdx';

-**In this lesson, we'll locate and extract links to individual product pages. We'll use /TBD/ to find the relevant bits of HTML.**
+**In this lesson, we'll locate and extract links to individual product pages. We'll use Cheerio to find the relevant bits of HTML.**

 ---
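
A small sketch of extracting product links with Cheerio and resolving relative paths into absolute URLs; the selector, URL, and markup are placeholders.

```js
import * as cheerio from "cheerio";

// Placeholder listing page URL and markup; the lesson works with the real page.
const listingUrl = "https://example.com/products";
const html = `
  <div class="product-item"><a href="/products/socks">Socks</a></div>
  <div class="product-item"><a href="/products/t-shirt">T-shirt</a></div>`;

const $ = cheerio.load(html);

// Collect href attributes and resolve them against the listing page URL.
const links = [];
$(".product-item a").each((index, element) => {
  const href = $(element).attr("href");
  if (href) {
    links.push(new URL(href, listingUrl).href);
  }
});
console.log(links); // [ 'https://example.com/products/socks', ... ]
```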

sources/academy/webscraping/scraping_basics_javascript2/10_crawling.md

Lines changed: 2 additions & 2 deletions

@@ -1,13 +1,13 @@
 ---
 title: Crawling websites with Node.js
 sidebar_label: Crawling websites
-description: Lesson about building a Node.js application for watching prices. Using the /TBD/ library to follow links to individual product pages.
+description: Lesson about building a Node.js application for watching prices. Using the Fetch API to follow links to individual product pages.
 slug: /scraping-basics-javascript2/crawling
 ---

 import Exercises from './_exercises.mdx';

-**In this lesson, we'll follow links to individual product pages. We'll use /TBD/ to download them and /TBD/ to process them.**
+**In this lesson, we'll follow links to individual product pages. We'll use the Fetch API to download them and Cheerio to process them.**

 ---
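
A rough sketch of the crawling step the new description outlines: downloading each product page with the Fetch API and handing the HTML to Cheerio. The links and selector are placeholders.

```js
import * as cheerio from "cheerio";

// Placeholder links; in the lesson these come from the listing page.
const productLinks = [
  "https://example.com/products/socks",
  "https://example.com/products/t-shirt",
];

for (const link of productLinks) {
  // Download each product page...
  const response = await fetch(link);
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} for ${link}`);
  }
  const html = await response.text();

  // ...and parse it so details such as title or price can be located.
  const $ = cheerio.load(html);
  console.log(link, $("title").text());
}
```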

sources/academy/webscraping/scraping_basics_javascript2/index.md

Lines changed: 3 additions & 3 deletions

@@ -19,9 +19,9 @@ In this course we'll use JavaScript to create an application for watching prices
 ## What we'll do
 
 - Inspect pages using browser DevTools.
-- Download web pages using the /TBD/ library.
-- Extract data from web pages using the /TBD/ library.
-- Save extracted data in various formats, e.g. CSV which MS Excel or Google Sheets can open.
+- Download web pages using the Fetch API.
+- Extract data from web pages using the Cheerio library.
+- Save extracted data in various formats (e.g. CSV which MS Excel or Google Sheets can open) using the json2csv library.
 - Follow links programmatically (crawling).
 - Save time and effort with frameworks, such as Crawlee, and scraping platforms, such as Apify.
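
Since the updated list also previews Crawlee, here's a rough sketch of the kind of code such a framework enables; the start URL and selectors are placeholders, not the course's actual target.

```js
import { CheerioCrawler } from "crawlee";

// Crawlee handles queueing, retries, and parallelism; we only describe what to
// do with each page. The selector and start URL below are placeholders.
const crawler = new CheerioCrawler({
  async requestHandler({ request, $, enqueueLinks, pushData }) {
    // Store data scraped from the current page.
    await pushData({
      url: request.url,
      title: $("title").text(),
    });
    // Follow links to individual product pages.
    await enqueueLinks({ selector: ".product-item a" });
  },
});

await crawler.run(["https://example.com/products"]);
```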
