
Commit ef7ddef

honzajavorek authored and daveomri committed
style: do not use three dots in incomplete examples (apify#1840)
There are many places with incomplete code examples, but nowhere did I use these three dots at the start and end of the code block, except for these two (four) occurrences. This PR removes them to achieve consistency with the rest of the examples in the course(s).
1 parent 34624f2 commit ef7ddef

4 files changed: +0 −12 lines changed


sources/academy/webscraping/scraping_basics_javascript2/11_scraping_variants.md

Lines changed: 0 additions & 4 deletions
````diff
@@ -72,8 +72,6 @@ These elements aren't visible to regular visitors. They're there just in case Ja
 Using our knowledge of Beautiful Soup, we can locate the options and extract the data we need:
 
 ```py
-...
-
 listing_url = "https://warehouse-theme-metal.myshopify.com/collections/sales"
 listing_soup = download(listing_url)
 
@@ -89,8 +87,6 @@ for product in listing_soup.select(".product-item"):
     else:
         item["variant_name"] = None
         data.append(item)
-
-...
 ```
 
 The CSS selector `.product-form__option.no-js` matches elements with both `product-form__option` and `no-js` classes. Then we're using the [descendant combinator](https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_combinator) to match all `option` elements somewhere inside the `.product-form__option.no-js` wrapper.
````
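As an aside, the compound selector plus descendant combinator described in that context line can be exercised in isolation. A minimal sketch — the HTML markup here is made up for illustration, not taken from the lesson:

```python
from bs4 import BeautifulSoup

# Hypothetical markup: only the first wrapper carries BOTH classes.
html = """
<div class="product-form__option no-js">
  <select><option>Red</option><option>Blue</option></select>
</div>
<div class="product-form__option">
  <select><option>Ignored</option></select>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
# ".product-form__option.no-js" requires both classes on one element;
# the space before "option" is the descendant combinator, matching any
# <option> nested anywhere inside the wrapper.
print([o.text for o in soup.select(".product-form__option.no-js option")])
# ['Red', 'Blue']
```

The second wrapper is skipped because it lacks the `no-js` class, which is exactly the filtering behavior the paragraph above relies on.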

sources/academy/webscraping/scraping_basics_javascript2/12_framework.md

Lines changed: 0 additions & 2 deletions
````diff
@@ -534,7 +534,6 @@ If you export the dataset as JSON, it should look something like this:
 To scrape IMDb data, you'll need to construct a `Request` object with the appropriate search URL for each movie title. The following code snippet gives you an idea of how to do this:
 
 ```py
-...
 from urllib.parse import quote_plus
 
 async def main():
@@ -550,7 +549,6 @@ async def main():
         await context.add_requests(requests)
 
 ...
-...
 ```
 
 When navigating to the first search result, you might find it helpful to know that `context.enqueue_links()` accepts a `limit` keyword argument, letting you specify the max number of HTTP requests to enqueue.
````
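For context, the `quote_plus` import that survives in the snippet above is what URL-encodes each movie title before it goes into the search URL. A standalone sketch — the `search_url` helper and the exact IMDb search path are assumptions for illustration, not taken from the lesson:

```python
from urllib.parse import quote_plus

# Hypothetical helper: the "https://www.imdb.com/find/?q=" pattern is an
# assumption; quote_plus encodes spaces as "+" and reserved characters
# as percent-escapes, which is what query strings expect.
def search_url(title: str) -> str:
    return f"https://www.imdb.com/find/?q={quote_plus(title)}"

print(search_url("Fight Club"))
# https://www.imdb.com/find/?q=Fight+Club
```

Each resulting URL would then be wrapped in a `Request` and passed to `context.add_requests()`, as the elided snippet suggests.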

sources/academy/webscraping/scraping_basics_python/11_scraping_variants.md

Lines changed: 0 additions & 4 deletions
````diff
@@ -71,8 +71,6 @@ These elements aren't visible to regular visitors. They're there just in case Ja
 Using our knowledge of Beautiful Soup, we can locate the options and extract the data we need:
 
 ```py
-...
-
 listing_url = "https://warehouse-theme-metal.myshopify.com/collections/sales"
 listing_soup = download(listing_url)
 
@@ -88,8 +86,6 @@ for product in listing_soup.select(".product-item"):
     else:
         item["variant_name"] = None
         data.append(item)
-
-...
 ```
 
 The CSS selector `.product-form__option.no-js` matches elements with both `product-form__option` and `no-js` classes. Then we're using the [descendant combinator](https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_combinator) to match all `option` elements somewhere inside the `.product-form__option.no-js` wrapper.
````

sources/academy/webscraping/scraping_basics_python/12_framework.md

Lines changed: 0 additions & 2 deletions
````diff
@@ -536,7 +536,6 @@ If you export the dataset as JSON, it should look something like this:
 To scrape IMDb data, you'll need to construct a `Request` object with the appropriate search URL for each movie title. The following code snippet gives you an idea of how to do this:
 
 ```py
-...
 from urllib.parse import quote_plus
 
 async def main():
@@ -552,7 +551,6 @@ async def main():
         await context.add_requests(requests)
 
 ...
-...
 ```
 
 When navigating to the first search result, you might find it helpful to know that `context.enqueue_links()` accepts a `limit` keyword argument, letting you specify the max number of HTTP requests to enqueue.
````

0 commit comments
