
Commit 717e49c

feat: add admonition to all old course pages

1 parent: 558d7cf
31 files changed, +112 −6 lines

sources/academy/platform/expert_scraping_with_apify/actors_webhooks.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ slug: /expert-scraping-with-apify/actors-webhooks
 **Learn more advanced details about Actors, how they work, and the default configurations they can take. Also, learn how to integrate your Actor with webhooks.**
 
 :::caution Updates coming
-This lesson is subject to change because it currently relies on code from our archived **Web scraping basics for JavaScript devs** course. For now you can still access the archived course, but we plan to close it in a few months. This lesson will be updated to remove the dependency.
+This lesson is subject to change because it currently relies on code from our archived **Web scraping basics for JavaScript devs** course. For now you can still access the archived course, but we plan to completely retire it in a few months. This lesson will be updated to remove the dependency.
 :::
 
 ---
```

sources/academy/platform/expert_scraping_with_apify/solutions/integrating_webhooks.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ slug: /expert-scraping-with-apify/solutions/integrating-webhooks
 **Learn how to integrate webhooks into your Actors. Webhooks are a super powerful tool, and can be used to do almost anything!**
 
 :::caution Updates coming
-This lesson is subject to change because it currently relies on code from our archived **Web scraping basics for JavaScript devs** course. For now you can still access the archived course, but we plan to close it in a few months. This lesson will be updated to remove the dependency.
+This lesson is subject to change because it currently relies on code from our archived **Web scraping basics for JavaScript devs** course. For now you can still access the archived course, but we plan to completely retire it in a few months. This lesson will be updated to remove the dependency.
 :::
 
 ---
```

sources/academy/webscraping/anti_scraping/mitigation/using_proxies.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ slug: /anti-scraping/mitigation/using-proxies
 **Learn how to use and automagically rotate proxies in your scrapers by using Crawlee, and a bit about how to obtain pools of proxies.**
 
 :::caution Updates coming
-This lesson is subject to change because it currently relies on code from our archived **Web scraping basics for JavaScript devs** course. For now you can still access the archived course, but we plan to close it in a few months. This lesson will be updated to remove the dependency.
+This lesson is subject to change because it currently relies on code from our archived **Web scraping basics for JavaScript devs** course. For now you can still access the archived course, but we plan to completely retire it in a few months. This lesson will be updated to remove the dependency.
 :::
 
 ---
```
sources/academy/webscraping/scraping_basics/_legacy.mdx

Lines changed: 5 additions & 0 deletions

```diff
@@ -0,0 +1,5 @@
+:::caution Archived course
+
+This is an archive of our old course. Check out our new [Web scraping basics for JavaScript devs](/academy/scraping-basics-javascript) course instead! We plan to completely retire this old course in a few months.
+
+:::
```
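The new `_legacy.mdx` file is an MDX partial: in Docusaurus, files whose names begin with an underscore are excluded from page generation and exist only to be imported into other documents. Each legacy page then imports the partial and renders it as a component, so the admonition text lives in one place. A minimal sketch of the pattern (the slug and heading here are hypothetical placeholders; the import path matches the diffs in this commit):

```mdx
---
slug: /scraping-basics-javascript/legacy/example
noindex: true
---

import LegacyAdmonition from '../scraping_basics/_legacy.mdx';

# Example legacy lesson

<LegacyAdmonition />
```

Because the partial is imported rather than copy-pasted, a future edit to `_legacy.mdx` (for example, after the old course is retired) propagates to all 28 legacy pages at once.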

sources/academy/webscraping/scraping_basics_legacy/best_practices.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -6,10 +6,14 @@ slug: /scraping-basics-javascript/legacy/best-practices
 noindex: true
 ---
 
+import LegacyAdmonition from '../scraping_basics/_legacy.mdx';
+
 # Best practices when writing scrapers {#best-practices}
 
 **Understand the standards and best practices that we here at Apify abide by to write readable, scalable, and maintainable code.**
 
+<LegacyAdmonition />
+
 ---
 
 Every developer has their own style, which evolves as they grow and learn. While one dev might prefer a more [functional](https://en.wikipedia.org/wiki/Functional_programming) style, another might find an [imperative](https://en.wikipedia.org/wiki/Imperative_programming) approach to be more intuitive. We at Apify understand this, and have written this best practices lesson with that in mind.
```

sources/academy/webscraping/scraping_basics_legacy/challenge/index.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -6,10 +6,14 @@ slug: /scraping-basics-javascript/legacy/challenge
 noindex: true
 ---
 
+import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
+
 # Challenge
 
 **Test your knowledge acquired in the previous sections of this course by building an Amazon scraper using Crawlee's CheerioCrawler!**
 
+<LegacyAdmonition />
+
 ---
 
 Before moving onto the other courses in the academy, we recommend following along with this section, as it combines everything you've learned in the previous lessons into one cohesive project that helps you prove to yourself that you've thoroughly understood the material.
```

sources/academy/webscraping/scraping_basics_legacy/challenge/initializing_and_setting_up.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -6,10 +6,14 @@ slug: /scraping-basics-javascript/legacy/challenge/initializing-and-setting-up
 noindex: true
 ---
 
+import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
+
 # Initialization & setting up
 
 **When you extract links from a web page, you often end up with a lot of irrelevant URLs. Learn how to filter the links to only keep the ones you need.**
 
+<LegacyAdmonition />
+
 ---
 
 The Crawlee CLI speeds up the process of setting up a Crawlee project. Navigate to the directory you'd like your project's folder to live, then open up a terminal instance and run the following command:
```

sources/academy/webscraping/scraping_basics_legacy/challenge/modularity.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -6,10 +6,14 @@ slug: /scraping-basics-javascript/legacy/challenge/modularity
 noindex: true
 ---
 
+import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
+
 # Modularity
 
 **Before you build your first web scraper with Crawlee, it is important to understand the concept of modularity in programming.**
 
+<LegacyAdmonition />
+
 ---
 
 Now that we've gotten our first request going, the first challenge is going to be selecting all of the resulting products on the page. Back in the browser, we'll use the DevTools hover tool to inspect a product.
```

sources/academy/webscraping/scraping_basics_legacy/challenge/scraping_amazon.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -6,10 +6,14 @@ slug: /scraping-basics-javascript/legacy/challenge/scraping-amazon
 noindex: true
 ---
 
+import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
+
 # Scraping Amazon
 
 **Build your first web scraper with Crawlee. Let's extract product information from Amazon to give you an idea of what real-world scraping looks like.**
 
+<LegacyAdmonition />
+
 ---
 
 In our quick chat about modularity, we finished the code for the results page and added a request for each product to the crawler's **RequestQueue**. Here, we need to scrape the description, so it shouldn't be too hard:
```

sources/academy/webscraping/scraping_basics_legacy/crawling/exporting_data.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -6,10 +6,14 @@ slug: /scraping-basics-javascript/legacy/crawling/exporting-data
 noindex: true
 ---
 
+import LegacyAdmonition from '../../scraping_basics/_legacy.mdx';
+
 # Exporting data {#exporting-data}
 
 **Learn how to export the data you scraped using Crawlee to CSV or JSON.**
 
+<LegacyAdmonition />
+
 ---
 
 In the previous lessons, you learned that:
```
