Commit a24ff8e (parent: 1b526af)

style: make Vale happy about H1s

20 files changed: +6 additions, −44 deletions
sources/academy/webscraping/scraping_basics_legacy/best_practices.md

Lines changed: 1 addition & 3 deletions

```diff
@@ -1,5 +1,5 @@
 ---
-title: Best practices
+title: Best practices when writing scrapers
 description: Understand the standards and best practices that we here at Apify abide by to write readable, scalable, and maintainable code.
 sidebar_position: 1.5
 slug: /scraping-basics-javascript/legacy/best-practices
@@ -8,8 +8,6 @@ noindex: true
 
 import LegacyAdmonition from '../scraping_basics/_legacy.mdx';
 
-# Best practices when writing scrapers {#best-practices}
-
 **Understand the standards and best practices that we here at Apify abide by to write readable, scalable, and maintainable code.**
 
 <LegacyAdmonition />
```

sources/academy/webscraping/scraping_basics_legacy/challenge/index.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -8,8 +8,6 @@ noindex: true
 
 import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
 
-# Challenge
-
 **Test your knowledge acquired in the previous sections of this course by building an Amazon scraper using Crawlee's CheerioCrawler!**
 
 <LegacyAdmonition />
```

sources/academy/webscraping/scraping_basics_legacy/challenge/initializing_and_setting_up.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -8,8 +8,6 @@ noindex: true
 
 import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
 
-# Initialization & setting up
-
 **When you extract links from a web page, you often end up with a lot of irrelevant URLs. Learn how to filter the links to only keep the ones you need.**
 
 <LegacyAdmonition />
```

sources/academy/webscraping/scraping_basics_legacy/challenge/modularity.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -8,8 +8,6 @@ noindex: true
 
 import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
 
-# Modularity
-
 **Before you build your first web scraper with Crawlee, it is important to understand the concept of modularity in programming.**
 
 <LegacyAdmonition />
```

sources/academy/webscraping/scraping_basics_legacy/challenge/scraping_amazon.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -8,8 +8,6 @@ noindex: true
 
 import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
 
-# Scraping Amazon
-
 **Build your first web scraper with Crawlee. Let's extract product information from Amazon to give you an idea of what real-world scraping looks like.**
 
 <LegacyAdmonition />
```

sources/academy/webscraping/scraping_basics_legacy/crawling/exporting_data.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -8,8 +8,6 @@ noindex: true
 
 import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
 
-# Exporting data {#exporting-data}
-
 **Learn how to export the data you scraped using Crawlee to CSV or JSON.**
 
 <LegacyAdmonition />
```

sources/academy/webscraping/scraping_basics_legacy/crawling/filtering_links.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -10,8 +10,6 @@ import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
 
-# Filtering links {#filtering-links}
-
 **When you extract links from a web page, you often end up with a lot of irrelevant URLs. Learn how to filter the links to only keep the ones you need.**
 
 <LegacyAdmonition />
```

sources/academy/webscraping/scraping_basics_legacy/crawling/first_crawl.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -8,8 +8,6 @@ noindex: true
 
 import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
 
-# Your first crawl {#your-first-crawl}
-
 **Learn how to crawl the web using Node.js, Cheerio and an HTTP client. Extract URLs from pages and use them to visit more websites.**
 
 <LegacyAdmonition />
```

sources/academy/webscraping/scraping_basics_legacy/crawling/headless_browser.md

Lines changed: 0 additions & 2 deletions

```diff
@@ -10,8 +10,6 @@ import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
 
-# Headless browsers {#headless-browser}
-
 **Learn how to scrape the web with a headless browser using only a few lines of code. Chrome, Firefox, Safari, Edge - all are supported.**
 
 <LegacyAdmonition />
```

sources/academy/webscraping/scraping_basics_legacy/crawling/pro_scraping.md

Lines changed: 1 addition & 3 deletions

````diff
@@ -8,8 +8,6 @@ noindex: true
 
 import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
 
-# Professional scraping 👷 {#pro-scraping}
-
 **Learn how to build scrapers quicker and get better and more robust results by using Crawlee, an open-source library for scraping in Node.js.**
 
 <LegacyAdmonition />
@@ -58,7 +56,7 @@ To use Crawlee, we have to install it from npm. Let's add it to our project from
 npm install crawlee
 ```
 
-After the installation completes, create a new file called **crawlee.js** and add the following code to it:
+After the installation completes, create a new file called `crawlee.js` and add the following code to it:
 
 ```js title=crawlee.js
 import { CheerioCrawler } from 'crawlee';
````
