
Commit 4c7811e

docs: vale fixes (#1433)
1 parent 3f1b899 commit 4c7811e

7 files changed, 25 insertions(+), 10 deletions(-)

.github/styles/config/vocabularies/Docs/accept.txt

Lines changed: 17 additions & 2 deletions
@@ -1,6 +1,5 @@
-Apify(?=-\w+)
+\bApify\b(?:-\w+)?
 @apify\.com
-\bApify\b
 Actor(s)?
 SDK(s)
 [Ss]torages
@@ -93,3 +92,19 @@ preconfigured
 asyncio
 parallelization
 IMDb
+iPhone
+iPhones
+iPad
+iPads
+screenshotting
+Fakestore
+SKUs
+SKU
+Shopify
+learnings
+subwoofer
+captcha
+captchas
+deduplicating
+
+jQuery

.vale.ini

Lines changed: 3 additions & 3 deletions
@@ -11,9 +11,9 @@ mdx = md
 
 [*.md]
 BasedOnStyles = Vale, Apify, write-good, Microsoft
-# Ignore URLs, HTML/XML tags starting with capital letter, lines containing = sign, http & https URL ending with ] or ), email addresses, inline code
-TokenIgnores = (<\/?[A-Z].+>), ([^\n]+=[^\n]*), (\[[^\]]+\]\([^\)]+\)), ([^\n]+@[^\n]+\.[^\n]), ({[^}]*}), `[^`]+`
-# Ignore HTML comments and code blocks
+# Ignore URLs, HTML/XML tags, lines with =, Markdown links, emails, curly braces, inline code
+TokenIgnores = (<\/?[A-Z][^>]*>), ([^\n]+=[^\n]*), (\[[^\]]+\]\([^\)]+\)), ([^\n]+@[^\n]+\.[^\n]), ({[^}]*}), `[^`]+`
+# Ignore HTML comments and Markdown code blocks
 BlockIgnores = (?s) (<!--.*?-->)|(```.*?```)
 Vale.Spelling = YES
 

sources/academy/platform/expert_scraping_with_apify/apify_api_and_client.md

Lines changed: 1 addition & 1 deletion
@@ -62,4 +62,4 @@ The new Actor should take the following input values, which be mapped to paramet
 
 ## Next up {#next}
 
-[Lesson VI](./migrations_maintaining_state.md) will teach us everything we need to know about migrations and how to handle them properly to avoid losing any state; therefore, increasing the reliability of our **demo-actor** Amazon scraper.
+[Lesson VI](./migrations_maintaining_state.md) will teach us everything we need to know about migrations and how to handle them properly to avoid losing any state; therefore, increasing the reliability of our `demo-actor` Amazon scraper.

sources/academy/tutorials/node_js/handle_blocked_requests_puppeteer.md

Lines changed: 1 addition & 1 deletion
@@ -88,4 +88,4 @@ Apify.main(async () => {
 });
 ```
 
-Now we have a crawler that catches the most common blocking issues on Google. In gotoFunction we will catch if the page doesn't load and in the handlePageFunction we check if we were redirected to the 'sorry page'. In both cases we throw an error afterwards so the request is added back to the crawling queue (otherwise the crawler would think everything was okay and would treat that request as handled).
+Now we have a crawler that catches the most common blocking issues on Google. In `gotoFunction` we will catch if the page doesn't load and in the handlePageFunction we check if we were redirected to the 'sorry page'. In both cases we throw an error afterwards so the request is added back to the crawling queue (otherwise the crawler would think everything was okay and would treat that request as handled).

sources/academy/tutorials/node_js/js_in_html.md

Lines changed: 1 addition & 1 deletion
@@ -54,4 +54,4 @@ const data = await page.evaluate(() => window.__sc_hydration);
 console.log(data);
 ```
 
-Which of these methods you use totally depends on the type of crawler you are using. Grabbing the data directly from the `window` object within the context of the browser using Puppeteer is of course the most reliable solution; however, it is less performant than making a static HTTP request and parsing the object directly from the downloaded HTML.
+Which of these methods you use totally depends on the type of crawler you are using. Grabbing the data directly from the `window` object within the context of the browser using Puppeteer is of course the most reliable solution; however, it is less efficient than making a static HTTP request and parsing the object directly from the downloaded HTML.

sources/academy/tutorials/node_js/processing_multiple_pages_web_scraper.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ Let's illustrate a solution to this problem by creating a scraper which starts w
 
 > Solving a common problem with scraper automatically deduplicating the same URLs.
 
-First, we need to start the scraper on the page from which we're going to do our enqueuing. To do that, we create one start URL with the label "enqueue" and URL "https://example.com/". Now we can proceed to enqueue all the pages. The first part of our pageFunction will look like this:
+First, we need to start the scraper on the page from which we're going to do our enqueuing. To do that, we create one start URL with the label "enqueue" and URL "https://example.com/". Now we can proceed to enqueue all the pages. The first part of our `pageFunction` will look like this:
 
 ```js
 async function pageFunction(context) {

sources/academy/webscraping/api_scraping/index.md

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ Especially for [dynamic sites](https://blog.apify.com/what-is-a-dynamic-page/),
 
 Depending on the website, sending large amounts of requests to their pages could result in a slight performance decrease on their end. By using their API instead, not only does your scraper run better, but it is less demanding of the target website.
 
-## Disdvantages of API Scraping {#disadvantages}
+## Disadvantages of API Scraping {#disadvantages}
 
 <br/>
 
