Commit a663e40

replace python with py for consistency with js and ts

1 parent 711167c

6 files changed (+33, -33 lines)

sources/academy/platform/deploying_your_code/index.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ console.log(addAllNumbers(1, 2, 3, 4)); // -> 10
 </TabItem>
 <TabItem value="Python" label="Python">
 
-```python
+```py
 # index.py
 def add_all_numbers (nums):
     total = 0
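
For context, the hunk stops after the first lines of the Python tab's snippet. A minimal completion consistent with what is shown (the loop and return are assumptions; the rest of the function body is not part of this diff):

```py
# index.py
def add_all_numbers(nums):
    total = 0
    for num in nums:
        total += num
    return total

print(add_all_numbers([1, 2, 3, 4]))  # -> 10
```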

sources/academy/platform/deploying_your_code/inputs_outputs.md

Lines changed: 2 additions & 2 deletions
@@ -90,7 +90,7 @@ Cool! When we run `node index.js`, we see **20**.
 
 Alternatively, when writing in a language other than JavaScript, we can create our own `get_input()` function which utilizes the Apify API when the actor is running on the platform. For this example, we are using the [Apify Client](../getting_started/apify_client.md) for Python to access the API.
 
-```Python
+```py
 # index.py
 from apify_client import ApifyClient
 from os import environ
@@ -164,7 +164,7 @@ Just as with the custom `get_input()` utility function, you can write a custom `
 
 > You can read and write your output anywhere; however, it is standard practice to use a folder named **storage**.
 
-```Python
+```py
 # index.py
 from apify_client import ApifyClient
 from os import environ
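
The context above mentions a custom `get_input()` function, but the hunk ends at the imports. A minimal sketch of such a helper, assuming the standard Apify environment variables on the platform and the conventional **storage** folder locally (the `APIFY_IS_AT_HOME` check and the local path are assumptions, not part of this diff):

```py
# index.py
import json
from os import environ

from apify_client import ApifyClient


def get_input():
    # On the platform, read the run's input from its default key-value store
    if environ.get('APIFY_IS_AT_HOME'):
        client = ApifyClient(environ['APIFY_TOKEN'])
        store = client.key_value_store(environ['APIFY_DEFAULT_KEY_VALUE_STORE_ID'])
        return store.get_record('INPUT')['value']
    # Locally, fall back to the conventional storage folder
    with open('./storage/key_value_stores/default/INPUT.json') as f:
        return json.load(f)
```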

sources/academy/platform/getting_started/apify_client.md

Lines changed: 8 additions & 8 deletions
@@ -51,7 +51,7 @@ import { ApifyClient } from 'apify-client';
 </TabItem>
 <TabItem value="Python" label="Python">
 
-```python
+```py
 # client.py
 from apify_client import ApifyClient
 
@@ -78,7 +78,7 @@ const client = new ApifyClient({
 </TabItem>
 <TabItem value="Python" label="Python">
 
-```python
+```py
 client = ApifyClient(token='YOUR_TOKEN')
 
 ```
@@ -103,7 +103,7 @@ const run = await client.actor('YOUR_USERNAME/adding-actor').call({
 </TabItem>
 <TabItem value="Python" label="Python">
 
-```python
+```py
 run = client.actor('YOUR_USERNAME/adding-actor').call(run_input={
     'num1': 4,
     'num2': 2
@@ -134,7 +134,7 @@ const dataset = client.dataset(run.defaultDatasetId);
 </TabItem>
 <TabItem value="Python" label="Python">
 
-```python
+```py
 dataset = client.dataset(run['defaultDatasetId'])
 
 ```
@@ -156,7 +156,7 @@ console.log(items);
 </TabItem>
 <TabItem value="Python" label="Python">
 
-```python
+```py
 items = dataset.list_items().items
 
 print(items)
@@ -194,7 +194,7 @@ console.log(items);
 </TabItem>
 <TabItem value="Python" label="Python">
 
-```python
+```py
 # client.py
 from apify_client import ApifyClient
 
@@ -234,7 +234,7 @@ const actor = client.actor('YOUR_USERNAME/adding-actor');
 </TabItem>
 <TabItem value="Python" label="Python">
 
-```python
+```py
 actor = client.actor('YOUR_USERNAME/adding-actor')
 
 ```
@@ -260,7 +260,7 @@ await actor.update({
 </TabItem>
 <TabItem value="Python" label="Python">
 
-```python
+```py
 actor.update(default_run_build='latest', default_run_memory_mbytes=256, default_run_timeout_secs=20)
 
 ```
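
Stitched together, the Python snippets touched in this file form one short script. Assembling them adds nothing beyond the hunks above except the ordering and the closing `})` of the `call()` argument, which the third hunk truncates:

```py
# client.py
from apify_client import ApifyClient

client = ApifyClient(token='YOUR_TOKEN')

# Start the adding actor and wait for the run to finish
run = client.actor('YOUR_USERNAME/adding-actor').call(run_input={
    'num1': 4,
    'num2': 2
})

# Fetch the run's default dataset and print its items
dataset = client.dataset(run['defaultDatasetId'])
items = dataset.list_items().items
print(items)
```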

sources/academy/tutorials/api/run_actor_and_retrieve_data_via_api.md

Lines changed: 1 addition & 1 deletion
@@ -116,7 +116,7 @@ items.forEach((item) => {
 </TabItem>
 <TabItem value="Python" label="Python">
 
-```python
+```py
 from apify_client import ApifyClient
 client = ApifyClient(token='YOUR_API_TOKEN')
 

sources/academy/tutorials/python/process_data_using_python.md

Lines changed: 7 additions & 7 deletions
@@ -31,7 +31,7 @@ In the page that opens, you can see your newly created actor. In the **Settings*
 
 First, we'll start with the `requirements.txt` file. Its purpose is to list all the third-party packages that your actor will use. We will be using the `pandas` package for parsing the downloaded weather data, and the `matplotlib` package for visualizing it. We don't particularly care about the specific versions of these packages, so we just list them in the file:
 
-```python
+```py
 # Add your dependencies here.
 # See https://pip.pypa.io/en/latest/cli/pip_install/#requirements-file-format
 # for how to format them
@@ -44,7 +44,7 @@ The actor's main logic will live in the `main.py` file. Let's delete everything
 
 Next, we'll import all the packages we will use in the code:
 
-```python
+```py
 from io import BytesIO
 import os
 
@@ -59,7 +59,7 @@ Next, we need to run the weather scraping actor and access its results. We do th
 
 First, we initialize an `ApifyClient` instance. All the necessary arguments are automatically provided to the actor process as environment variables accessible in Python through the `os.environ` mapping. We need to run the actor from the previous tutorial, which we have named `bbc-weather-scraper`, and wait for it to finish. So, we create a sub-client for working with that actor and run the actor through it. We then check whether the actor run has succeeded. If so, we create a client for working with its default dataset.
 
-```python
+```py
 # Initialize the main ApifyClient instance
 client = ApifyClient(os.environ['APIFY_TOKEN'], api_url=os.environ['APIFY_API_BASE_URL'])
 
@@ -79,7 +79,7 @@ dataset_client = client.dataset(scraper_run['defaultDatasetId'])
 
 Now, we need to load the data from the dataset to a Pandas dataframe. Pandas supports reading data from a CSV file stream, so we just create a stream with the dataset items in the right format and supply it to `pandas.read_csv()`.
 
-```python
+```py
 # Load the dataset items into a pandas dataframe
 print('Parsing weather data...')
 dataset_items_stream = dataset_client.stream_items(item_format='csv')
@@ -88,7 +88,7 @@ weather_data = pandas.read_csv(dataset_items_stream, parse_dates=['datetime'], d
 
 Once we have the data loaded, we can process it. Each data row comes as three fields: `datetime`, `location` and `temperature`. We would like to transform the data so that we have the datetimes in one column, and the temperatures for each location at that datetime in separate columns, one for each location. To achieve this, we use the `.pivot()` method on the dataframe. Since the temperature varies considerably between day and night, and we would like to get an overview of the temperature trends over a longer period of time, we calculate a rolling average of the temperatures with a 24-hour window.
 
-```python
+```py
 # Transform data to a pivot table for easier plotting
 pivot = weather_data.pivot(index='datetime', columns='location', values='temperature')
 mean_daily_temperatures = pivot.rolling(window='24h', min_periods=24, center=True).mean()
@@ -98,7 +98,7 @@ mean_daily_temperatures = pivot.rolling(window='24h', min_periods=24, center=Tru
 
 With the data processed, we can then make a plot of the results. For that, we use the `.plot()` method of the dataframe, which creates a figure with the plot, using the Matplotlib library internally. We set the right titles and labels to the plot, and apply some additional formatting to achieve a nicer result.
 
-```python
+```py
 # Create a plot of the data
 print('Plotting the data...')
 axes = mean_daily_temperatures.plot(figsize=(10, 5))
@@ -112,7 +112,7 @@ axes.figure.tight_layout()
 
 As the last step, we need to save the plot to a record in a [key-value store](/platform/storage/key-value-store) on the Apify platform, so that we can access it later. We save the rendered figure with the plot to an in-memory buffer, and then save the contents of that buffer to the default key-value store of the actor run through its resource subclient.
 
-```python
+```py
 # Get the resource sub-client for working with the default key-value store of the run
 key_value_store_client = client.key_value_store(os.environ['APIFY_DEFAULT_KEY_VALUE_STORE_ID'])
 
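
The pivot-plus-rolling-average step in the hunks above is the heart of this file's processing. Here is a self-contained sketch of the same transformation on fabricated hourly data (the toy values and city names are placeholders, not the tutorial's real dataset):

```py
import pandas

# Fabricated hourly temperatures for two locations, mimicking the
# (datetime, location, temperature) rows the scraper produces
timestamps = pandas.date_range('2023-06-01', periods=72, freq='h')
rows = [
    {'datetime': ts, 'location': city, 'temperature': base + i % 24}
    for i, ts in enumerate(timestamps)
    for city, base in (('Prague', 15), ('Honolulu', 24))
]
weather_data = pandas.DataFrame(rows)

# One datetime index, one temperature column per location
pivot = weather_data.pivot(index='datetime', columns='location', values='temperature')

# Smooth out the day/night swing with a centered 24-hour rolling average
mean_daily_temperatures = pivot.rolling(window='24h', min_periods=24, center=True).mean()
print(mean_daily_temperatures.dropna().head())
```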

sources/academy/tutorials/python/scrape_data_python.md

Lines changed: 14 additions & 14 deletions
@@ -63,7 +63,7 @@ In the page that opens, you can see your newly created actor. In the **Settings*
 
 First we'll start with the `requirements.txt` file. Its purpose is to list all the third-party packages that your actor will use. We will be using the `requests` package for downloading the BBC Weather pages, and the `beautifulsoup4` package for parsing and processing the downloaded pages. We don't particularly care about the specific versions of these packages, so we just list them in the file:
 
-```python
+```py
 # Add your dependencies here.
 # See https://pip.pypa.io/en/latest/cli/pip_install/#requirements-file-format
 # for how to format them
@@ -78,7 +78,7 @@ Finally, we can get to writing the main logic for the actor, which will live in
 
 First, we need to import all the packages we will use in the code:
 
-```python
+```py
 from datetime import datetime, time, timedelta, timezone
 import os
 import re
@@ -90,7 +90,7 @@ import requests
 
 Next, let's set up the locations we want to scrape in a constant for easier reference and, optionally, modification.
 
-```python
+```py
 # Locations which to scrape and their BBC Weather IDs
 LOCATIONS = [
     ('Prague', '3067696'),
@@ -103,7 +103,7 @@ LOCATIONS = [
 
 We'll be scraping each location separately. For each location, we need to know in which timezone it resides and what is the first displayed date in the weather forecast for that location. We will scrape each of the 14 forecast days one by one. For each day, we will first download its forecast page using the `requests` library, and then parse the downloaded HTML using the `BeautifulSoup` parser:
 
-```python
+```py
 # List with scraped results
 weather_data = []
 
@@ -126,7 +126,7 @@ First, we extract the timezone from the second element with class `wr-c-footer-t
 
 Afterwards, we can figure out which date is represented by the first displayed day. We find the element with the class `wr-day--active` containing the header for the currently displayed day. Inside it, we find the element with the title of that day, which has the class `wr-day__title`. This element has the accessibility label containing the actual date of the day in its `aria-label` attribute, but it contains only the day and month and not the year, so we can't use it directly. Instead, to get the full date of the first displayed day, we compare the day from the accessibility label and the day from the current datetime at the location. If they match, we know the first displayed date is the current date at the location. If they don't, we know the first displayed date is the day before the current date at the location.
 
-```python
+```py
 # When parsing the first day, find out what day it represents,
 # to know when do the results start
 if day_offset == 0:
@@ -162,7 +162,7 @@ To get the datetime of each slot, we need to combine the date of the first displ
 
 Finally, we can put all the extracted information together and push them to the array holding the resulting data.
 
-```python
+```py
 # Go through the elements for each displayed time slot of the displayed day
 slot_container = soup.find(class_='wr-time-slot-container__slots')
 for slot in slot_container.find_all(class_='wr-time-slot'):
@@ -192,7 +192,7 @@ As the last step, we need to store the scraped data in a dataset on the Apify pl
 
 First, we initialize an `ApifyClient` instance. All the necessary arguments are automatically provided to the actor process as environment variables accessible in Python through the `os.environ` mapping. We will save the data into the default dataset belonging to the actor run, so we create a sub-client for working with that dataset, and push the data into it using its `.push_items(...)` method.
 
-```python
+```py
 # Initialize the main ApifyClient instance
 client = ApifyClient(os.environ['APIFY_TOKEN'], api_url=os.environ['APIFY_API_BASE_URL'])
 
@@ -231,7 +231,7 @@ In the page that opens, you can see your newly created actor. In the **Settings*
 
 First, we'll start with the `requirements.txt` file. Its purpose is to list all the third-party packages that your actor will use. We will be using the `pandas` package for parsing the downloaded weather data, and the `matplotlib` package for visualizing it. We don't particularly care about the specific versions of these packages, so we just list them in the file:
 
-```python
+```py
 # Add your dependencies here.
 # See https://pip.pypa.io/en/latest/cli/pip_install/#requirements-file-format
 # for how to format them
@@ -244,7 +244,7 @@ The actor's main logic will live in the `main.py` file. Let's delete everything
 
 Next, we'll import all the packages we will use in the code:
 
-```python
+```py
 from io import BytesIO
 import os
 
@@ -259,7 +259,7 @@ Next, we need to run the weather scraping actor and access its results. We do th
 
 First, we initialize an `ApifyClient` instance. All the necessary arguments are automatically provided to the actor process as environment variables accessible in Python through the `os.environ` mapping. We need to run the actor from the previous tutorial, which we have named `bbc-weather-scraper`, and wait for it to finish. So, we create a sub-client for working with that actor and run the actor through it. We then check whether the actor run has succeeded. If so, we create a client for working with its default dataset.
 
-```python
+```py
 # Initialize the main ApifyClient instance
 client = ApifyClient(os.environ['APIFY_TOKEN'], api_url=os.environ['APIFY_API_BASE_URL'])
 
@@ -279,7 +279,7 @@ dataset_client = client.dataset(scraper_run['defaultDatasetId'])
 
 Now, we need to load the data from the dataset to a Pandas dataframe. Pandas supports reading data from a CSV file stream, so we just create a stream with the dataset items in the right format and supply it to `pandas.read_csv()`.
 
-```python
+```py
 # Load the dataset items into a pandas dataframe
 print('Parsing weather data...')
 dataset_items_stream = dataset_client.stream_items(item_format='csv')
@@ -288,7 +288,7 @@ weather_data = pandas.read_csv(dataset_items_stream, parse_dates=['datetime'], d
 
 Once we have the data loaded, we can process it. Each data row comes as three fields: `datetime`, `location` and `temperature`. We would like to transform the data so that we have the datetimes in one column, and the temperatures for each location at that datetime in separate columns, one for each location. To achieve this, we use the `.pivot()` method on the dataframe. Since the temperature varies considerably between day and night, and we would like to get an overview of the temperature trends over a longer period of time, we calculate a rolling average of the temperatures with a 24-hour window.
 
-```python
+```py
 # Transform data to a pivot table for easier plotting
 pivot = weather_data.pivot(index='datetime', columns='location', values='temperature')
 mean_daily_temperatures = pivot.rolling(window='24h', min_periods=24, center=True).mean()
@@ -298,7 +298,7 @@ mean_daily_temperatures = pivot.rolling(window='24h', min_periods=24, center=Tru
 
 With the data processed, we can then make a plot of the results. For that, we use the `.plot()` method of the dataframe, which creates a figure with the plot, using the Matplotlib library internally. We set the right titles and labels to the plot, and apply some additional formatting to achieve a nicer result.
 
-```python
+```py
 # Create a plot of the data
 print('Plotting the data...')
 axes = mean_daily_temperatures.plot(figsize=(10, 5))
@@ -312,7 +312,7 @@ axes.figure.tight_layout()
 
 As the last step, we need to save the plot to a record in a [key-value store](/platform/storage/key-value-store) on the Apify platform, so that we can access it later. We save the rendered figure with the plot to an in-memory buffer, and then save the contents of that buffer to the default key-value store of the actor run through its resource subclient.
 
-```python
+```py
 # Get the resource sub-client for working with the default key-value store of the run
 key_value_store_client = client.key_value_store(os.environ['APIFY_DEFAULT_KEY_VALUE_STORE_ID'])
 
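
The trickiest reasoning in this file is anchoring the forecast's first date from an `aria-label` that lacks a year. A self-contained sketch of that comparison (the HTML fragment and the timezone are hard-coded stand-ins for values the tutorial scrapes from the live page):

```py
import re
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

from bs4 import BeautifulSoup

# Stand-in for part of a downloaded forecast page; the tutorial fetches
# the real page with requests and parses it the same way
html = '''
<div class="wr-day wr-day--active">
  <span class="wr-day__title" aria-label="Monday 5th June">Mon</span>
</div>
'''
soup = BeautifulSoup(html, 'html.parser')

# Current time in the location's timezone (read from the page footer in
# the tutorial; hard-coded here)
now_at_location = datetime.now(tz=ZoneInfo('Europe/Prague'))

# The aria-label gives day and month but no year, so compare its day
# number against today's day at the location
day_title = soup.find(class_='wr-day--active').find(class_='wr-day__title')
label_day = int(re.search(r'\d+', day_title['aria-label']).group())

if label_day == now_at_location.day:
    first_displayed_date = now_at_location.date()
else:
    # The label shows yesterday's day number: forecast starts yesterday
    first_displayed_date = now_at_location.date() - timedelta(days=1)
print(first_displayed_date)
```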
