
Commit 8689501

style: use Markdown's automatic numbering (#1881)
Let the machine do the counting!

![Screenshot 2025-09-04 at 10-40-49: machine counting at DuckDuckGo](https://github.com/user-attachments/assets/10377d28-77fa-459c-8505-8280b42bd34b)
1 parent 52f80a3 commit 8689501


3 files changed: +10 -10 lines changed


sources/academy/webscraping/scraping_basics_javascript2/08_saving_data.md

Lines changed: 2 additions & 2 deletions
@@ -229,8 +229,8 @@ Open the `products.csv` file we created in the lesson using a spreadsheet applic
Let's use [Google Sheets](https://www.google.com/sheets/about/), which is free to use. After logging in with a Google account:

1. Go to **File > Import**, choose **Upload**, and select the file. Import the data using the default settings. You should see a table with all the data.
-2. Select the header row. Go to **Data > Create filter**.
-3. Use the filter icon that appears next to `minPrice`. Choose **Filter by condition**, select **Greater than**, and enter **500** in the text field. Confirm the dialog. You should see only the filtered data.
+1. Select the header row. Go to **Data > Create filter**.
+1. Use the filter icon that appears next to `minPrice`. Choose **Filter by condition**, select **Greater than**, and enter **500** in the text field. Confirm the dialog. You should see only the filtered data.

![CSV in Google Sheets](images/csv-sheets.png)

sources/academy/webscraping/scraping_basics_python/08_saving_data.md

Lines changed: 2 additions & 2 deletions
@@ -215,8 +215,8 @@ Open the `products.csv` file we created in the lesson using a spreadsheet applic
Let's use [Google Sheets](https://www.google.com/sheets/about/), which is free to use. After logging in with a Google account:

1. Go to **File > Import**, choose **Upload**, and select the file. Import the data using the default settings. You should see a table with all the data.
-2. Select the header row. Go to **Data > Create filter**.
-3. Use the filter icon that appears next to `min_price`. Choose **Filter by condition**, select **Greater than**, and enter **500** in the text field. Confirm the dialog. You should see only the filtered data.
+1. Select the header row. Go to **Data > Create filter**.
+1. Use the filter icon that appears next to `min_price`. Choose **Filter by condition**, select **Greater than**, and enter **500** in the text field. Confirm the dialog. You should see only the filtered data.

![CSV in Google Sheets](images/csv-sheets.png)
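As a side note for readers who prefer to verify the result outside Google Sheets, the same "greater than 500" filter can be reproduced with Python's standard `csv` module. This is not part of either lesson: it assumes `products.csv` sits in the working directory and that the price column (`min_price` here, `minPrice` in the JavaScript variant) holds plain numbers.

```python
import csv

# Keep only products priced above 500, mirroring the
# "Greater than 500" filter applied in Google Sheets.
with open("products.csv", newline="", encoding="utf-8") as file:
    for row in csv.DictReader(file):
        if float(row["min_price"]) > 500:  # assumes a numeric min_price column
            print(row)
```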

sources/academy/webscraping/scraping_basics_python/12_framework.md

Lines changed: 6 additions & 6 deletions
@@ -66,12 +66,12 @@ if __name__ == '__main__':
In the code, we do the following:

1. We import the necessary modules and define an asynchronous `main()` function.
-2. Inside `main()`, we first create a crawler object, which manages the scraping process. In this case, it's a crawler based on Beautiful Soup.
-3. Next, we define a nested asynchronous function called `handle_listing()`. It receives a `context` parameter, and Python type hints show it's of type `BeautifulSoupCrawlingContext`. Type hints help editors suggest what we can do with the object.
-4. We use a Python decorator (the line starting with `@`) to register `handle_listing()` as the _default handler_ for processing HTTP responses.
-5. Inside the handler, we extract the page title from the `soup` object and print its text without whitespace.
-6. At the end of the function, we run the crawler on a product listing URL and await its completion.
-7. The last two lines ensure that if the file is executed directly, Python will properly run the `main()` function using its asynchronous event loop.
+1. Inside `main()`, we first create a crawler object, which manages the scraping process. In this case, it's a crawler based on Beautiful Soup.
+1. Next, we define a nested asynchronous function called `handle_listing()`. It receives a `context` parameter, and Python type hints show it's of type `BeautifulSoupCrawlingContext`. Type hints help editors suggest what we can do with the object.
+1. We use a Python decorator (the line starting with `@`) to register `handle_listing()` as the _default handler_ for processing HTTP responses.
+1. Inside the handler, we extract the page title from the `soup` object and print its text without whitespace.
+1. At the end of the function, we run the crawler on a product listing URL and await its completion.
+1. The last two lines ensure that if the file is executed directly, Python will properly run the `main()` function using its asynchronous event loop.

Don't worry if some of this is new. We don't need to know exactly how [`asyncio`](https://docs.python.org/3/library/asyncio.html), decorators, or type hints work. Let's stick to the practical side and observe what the program does when executed:
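The numbered list above describes a Crawlee program that isn't included in this diff. For orientation, here is a minimal sketch of such a script; it assumes a recent Crawlee for Python release that exports the crawler classes from `crawlee.crawlers`, and the listing URL is a placeholder rather than the lesson's actual address.

```python
import asyncio

from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main():
    # A crawler based on Beautiful Soup manages the scraping process.
    crawler = BeautifulSoupCrawler()

    # The decorator registers handle_listing() as the default handler
    # for processing HTTP responses.
    @crawler.router.default_handler
    async def handle_listing(context: BeautifulSoupCrawlingContext):
        # Extract the page title from the soup object and print it
        # without surrounding whitespace.
        print(context.soup.title.text.strip())

    # Run the crawler on a product listing URL (placeholder) and await completion.
    await crawler.run(["https://example.com/products"])


if __name__ == '__main__':
    # If the file is executed directly, run main() in asyncio's event loop.
    asyncio.run(main())
```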
