Commit d3eb528 (parent 30d0148)

style: do not use three dots in incomplete examples

4 files changed: 0 additions, 12 deletions

sources/academy/webscraping/scraping_basics_javascript2/11_scraping_variants.md

Lines changed: 0 additions & 4 deletions

@@ -72,8 +72,6 @@ These elements aren't visible to regular visitors. They're there just in case Ja
 Using our knowledge of Beautiful Soup, we can locate the options and extract the data we need:
 
 ```py
-...
-
 listing_url = "https://warehouse-theme-metal.myshopify.com/collections/sales"
 listing_soup = download(listing_url)
 
@@ -89,8 +87,6 @@ for product in listing_soup.select(".product-item"):
     else:
         item["variant_name"] = None
     data.append(item)
-
-...
 ```
 
 The CSS selector `.product-form__option.no-js` matches elements with both `product-form__option` and `no-js` classes. Then we're using the [descendant combinator](https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_combinator) to match all `option` elements somewhere inside the `.product-form__option.no-js` wrapper.
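To see how that selector behaves in isolation, here is a minimal, self-contained sketch. The HTML snippet is invented for illustration and only mimics the structure described above; it is not taken from the actual page:

```python
from bs4 import BeautifulSoup

# Hypothetical markup: one wrapper has both classes, one has only the first.
html = """
<div class="product-form__option no-js">
  <select>
    <option>Red</option>
    <option>Blue</option>
  </select>
</div>
<div class="product-form__option">
  <select><option>Skipped</option></select>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
# ".product-form__option.no-js" requires BOTH classes on the same element;
# the space before "option" is the descendant combinator.
names = [option.text for option in soup.select(".product-form__option.no-js option")]
print(names)  # only the options inside the .no-js wrapper
```

Note that the second `<div>` is skipped entirely, because matching on `.product-form__option.no-js` demands both classes on one element, not either of them.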

sources/academy/webscraping/scraping_basics_javascript2/12_framework.md

Lines changed: 0 additions & 2 deletions

@@ -534,7 +534,6 @@ If you export the dataset as JSON, it should look something like this:
 To scrape IMDb data, you'll need to construct a `Request` object with the appropriate search URL for each movie title. The following code snippet gives you an idea of how to do this:
 
 ```py
-...
 from urllib.parse import quote_plus
 
 async def main():
@@ -550,7 +549,6 @@ async def main():
     await context.add_requests(requests)
 
 ...
-...
 ```
 
 When navigating to the first search result, you might find it helpful to know that `context.enqueue_links()` accepts a `limit` keyword argument, letting you specify the max number of HTTP requests to enqueue.
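The `quote_plus` import visible in the diff is the key piece for building those search URLs. A minimal sketch of the idea follows; the exact IMDb search-URL format is an assumption for illustration, not taken from this commit:

```python
from urllib.parse import quote_plus

movie_titles = ["Fight Club", "Léon: The Professional"]

# quote_plus percent-encodes characters that are unsafe in a query string
# and turns spaces into "+", the HTML-form-style encoding search URLs expect.
search_urls = [
    f"https://www.imdb.com/find/?q={quote_plus(title)}"  # assumed URL pattern
    for title in movie_titles
]
print(search_urls[0])  # https://www.imdb.com/find/?q=Fight+Club
```

Each resulting URL can then be wrapped in a `Request` object and passed to `context.add_requests()`, as the surrounding snippet suggests.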

sources/academy/webscraping/scraping_basics_python/11_scraping_variants.md

Lines changed: 0 additions & 4 deletions

@@ -71,8 +71,6 @@ These elements aren't visible to regular visitors. They're there just in case Ja
 Using our knowledge of Beautiful Soup, we can locate the options and extract the data we need:
 
 ```py
-...
-
 listing_url = "https://warehouse-theme-metal.myshopify.com/collections/sales"
 listing_soup = download(listing_url)
 
@@ -88,8 +86,6 @@ for product in listing_soup.select(".product-item"):
     else:
         item["variant_name"] = None
     data.append(item)
-
-...
 ```
 
 The CSS selector `.product-form__option.no-js` matches elements with both `product-form__option` and `no-js` classes. Then we're using the [descendant combinator](https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_combinator) to match all `option` elements somewhere inside the `.product-form__option.no-js` wrapper.

sources/academy/webscraping/scraping_basics_python/12_framework.md

Lines changed: 0 additions & 2 deletions

@@ -533,7 +533,6 @@ If you export the dataset as JSON, it should look something like this:
 To scrape IMDb data, you'll need to construct a `Request` object with the appropriate search URL for each movie title. The following code snippet gives you an idea of how to do this:
 
 ```py
-...
 from urllib.parse import quote_plus
 
 async def main():
@@ -549,7 +548,6 @@ async def main():
     await context.add_requests(requests)
 
 ...
-...
 ```
 
 When navigating to the first search result, you might find it helpful to know that `context.enqueue_links()` accepts a `limit` keyword argument, letting you specify the max number of HTTP requests to enqueue.
