
Commit 3d6ce2a

fix: change image paths from png to webp
1 parent 3a10fe1 · commit 3d6ce2a

File tree

1 file changed: +5 -5 lines changed

sources/academy/webscraping/scraping_basics_python/13_platform.md

Lines changed: 5 additions & 5 deletions
@@ -85,7 +85,7 @@ The file contains a single asynchronous function, `main()`. At the beginning, it
 
 Every program that runs on the Apify platform first needs to be packaged as a so-called [Actor](https://apify.com/actors)—a standardized container with designated places for input and output. Crawlee scrapers automatically connect their default dataset to the Actor output, but input must be handled explicitly in the code.
 
-![The expected file structure](./images/actor-file-structure.png)
+![The expected file structure](./images/actor-file-structure.webp)
 
 We'll now adjust the template so that it runs our program for watching prices. As the first step, we'll create a new empty file, `crawler.py`, inside the `warehouse-watchdog/src` directory. Then, we'll fill this file with final, unchanged code from the previous lesson:
 
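The explicit input handling mentioned in the hunk above can be illustrated with a minimal sketch of `src/main.py`, assuming the Apify Python SDK; the `crawl` import is a hypothetical stand-in for the entrypoint of `crawler.py`, not necessarily the lesson's exact template:

```py
from apify import Actor

from .crawler import main as crawl  # hypothetical import of crawler.py's entrypoint


async def main():
    async with Actor:  # connects the program to the platform's input/output storages
        actor_input = await Actor.get_input() or {}  # input must be read explicitly; may be None
        Actor.log.info(f"Input: {actor_input}")
        await crawl()  # the crawler's default dataset becomes the Actor output
```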

@@ -255,11 +255,11 @@ Actor build detail https://console.apify.com/actors/a123bCDefghiJkLMN#/builds/0.
 
 After opening the link in our browser, assuming we're logged in, we'll see the **Source** screen on the Actor's detail page. We'll go to the **Input** tab of that screen. We won't change anything—just hit **Start**, and we should see logs similar to what we see locally, but this time our scraper will be running in the cloud.
 
-![Actor's detail page, screen Source, tab Input](./images/actor-input.png)
+![Actor's detail page, screen Source, tab Input](./images/actor-input.webp)
 
 When the run finishes, the interface will turn green. On the **Output** tab, we can preview the results as a table or JSON. We can even export the data to formats like CSV, XML, Excel, RSS, and more.
 
-![Actor's detail page, screen Source, tab Output](./images/actor-output.png)
+![Actor's detail page, screen Source, tab Output](./images/actor-output.webp)
 
 :::info Accessing data programmatically
 
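Alongside the export formats listed in the hunk above, the collected data can also be fetched programmatically. A minimal sketch using the official `apify-client` package, with placeholder token and dataset ID:

```py
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")        # placeholder API token
dataset_client = client.dataset("YOUR_DATASET_ID")  # placeholder dataset ID

# list_items() returns a paginated ListPage; .items holds the scraped records
for item in dataset_client.list_items().items:
    print(item)
```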

@@ -273,7 +273,7 @@ Now that our scraper is deployed, let's automate its execution. In the Apify web
 
 From now on, the Actor will execute daily. We can inspect each run, view logs, check collected data, [monitor stats and charts](https://docs.apify.com/platform/monitoring), and even set up alerts.
 
-![Schedule detail page](./images/actor-schedule.png)
+![Schedule detail page](./images/actor-schedule.webp)
 
 ## Adding support for proxies
 

@@ -389,7 +389,7 @@ Run: Building Actor warehouse-watchdog
 
 Back in the Apify console, go to the **Source** screen and switch to the **Input** tab. You'll see the new **Proxy config** option, which defaults to **Datacenter - Automatic**.
 
-![Actor's detail page, screen Source, tab Input with proxies](./images/actor-input-proxies.png)
+![Actor's detail page, screen Source, tab Input with proxies](./images/actor-input-proxies.webp)
 
 Leave it as is and click **Start**. This time, the logs should show `Using proxy: yes`, as the scraper uses proxies provided by the platform:
 
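For reference, a **Proxy config** input like the one in the hunk above is usually turned into a proxy configuration in code. A minimal sketch assuming the Apify Python SDK; the `proxyConfig` field name is an assumption, not necessarily the lesson's exact input schema:

```py
from apify import Actor


async def main():
    async with Actor:
        actor_input = await Actor.get_input() or {}
        # Build a proxy configuration from the Actor's proxy input;
        # "proxyConfig" is an assumed input field name
        proxy_config = await Actor.create_proxy_configuration(
            actor_proxy_input=actor_input.get("proxyConfig"),
        )
        Actor.log.info(f"Using proxy: {'yes' if proxy_config else 'no'}")
```

In Crawlee for Python, the resulting object can then be handed to the crawler through its `proxy_configuration` parameter, so every request is routed through the platform's proxies.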
