Commit 49fd16e

fix: revert details component rendering to pre-v3 state (#1207)
1 parent 35f50e1 commit 49fd16e

File tree: 7 files changed (+26, −32 lines)


apify-docs-theme/src/theme/MDXComponents/Details.js

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ export default function MDXDetails(props) {
   // Split summary item from the rest to pass it as a separate prop to the
   // Details theme component
   const summary = items.find(
-    (item) => React.isValidElement(item) && item.props?.mdxType === 'summary',
+    (item) => React.isValidElement(item) && item.type === 'summary',
   );
   const children = <>{items.filter((item) => item !== summary)}</>;
   return (
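For context, this check matters because newer MDX versions no longer attach a `mdxType` prop to compiled elements, so the summary child has to be identified by its element `type` instead. A simplified sketch of the splitting logic, using plain objects in place of React elements (names are illustrative, not the actual component's data):

```javascript
// Stand-ins for React children: each item carries a `type`, like a JSX element.
const items = [
  { type: 'summary', content: 'Solution' },
  { type: 'pre', content: 'print("hello")' },
];

// Post-revert check: identify the summary child by item.type,
// not by the removed props.mdxType field.
const summary = items.find((item) => item.type === 'summary');

// Everything that is not the summary becomes the details body.
const children = items.filter((item) => item !== summary);

console.log(summary.content);  // Solution
console.log(children.length);  // 1
```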

apify-docs-theme/src/theme/MDXComponents/index.js

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ const MDXComponents = {
   code: MDXCode,
   a: MDXA,
   pre: MDXPre,
-  details: MDXDetails,
+  Details: MDXDetails,
   ul: MDXUl,
   img: MDXImg,
   h1: (props) => <MDXHeading as="h1" {...props} />,
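The one-line change above moves the mapping from the lowercase HTML tag to the capitalized component name: plain `<details>` in MDX now renders as the native HTML element, while `<Details>` still resolves to the theme component. A minimal sketch of how such a components map behaves (plain JavaScript with illustrative string stand-ins, not the real MDX runtime):

```javascript
// Hypothetical lookup mimicking an MDX components map: lowercase keys
// shadow native HTML elements, capitalized keys only match components
// used explicitly as <Details>.
const MDXComponents = {
  pre: 'MDXPre',
  Details: 'MDXDetails', // post-fix: only <Details> uses the theme component
};

// Resolve a tag name to a component, falling back to the native element.
function resolve(tag) {
  return MDXComponents[tag] ?? tag;
}

console.log(resolve('details')); // details (native element, untouched)
console.log(resolve('Details')); // MDXDetails
```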

sources/academy/webscraping/scraping_basics_python/04_downloading_html.md

Lines changed: 6 additions & 7 deletions
@@ -7,7 +7,6 @@ slug: /scraping-basics-python/downloading-html
 ---
 
 import Exercises from './_exercises.mdx';
-import Details from '@theme/Details';
 
 **In this lesson we'll start building a Python application for watching prices. As a first step, we'll use the HTTPX library to download HTML code of a product listing page.**
 
@@ -149,7 +148,7 @@ Download HTML of a product listing page, but this time from a real world e-comme
 https://www.amazon.com/s?k=darth+vader
 ```
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 ```py
@@ -162,7 +161,7 @@ https://www.amazon.com/s?k=darth+vader
 ```
 
 If you get `Server error '503 Service Unavailable'`, that's just Amazon's anti-scraping protections. You can learn about how to overcome those in our [Anti-scraping protections](../anti_scraping/index.md) course.
-</Details>
+</details>
 
 ### Save downloaded HTML as a file
 
@@ -172,7 +171,7 @@ Download HTML, then save it on your disk as a `products.html` file. You can use
 https://warehouse-theme-metal.myshopify.com/collections/sales
 ```
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 Right in your Terminal or Command Prompt, you can create files by _redirecting output_ of command line programs:
@@ -193,7 +192,7 @@ https://warehouse-theme-metal.myshopify.com/collections/sales
 Path("products.html").write_text(response.text)
 ```
 
-</Details>
+</details>
 
 ### Download an image as a file
 
@@ -203,7 +202,7 @@ Download a product image, then save it on your disk as a file. While HTML is _te
 https://warehouse-theme-metal.myshopify.com/cdn/shop/products/sonyxbr55front_f72cc8ff-fcd6-4141-b9cc-e1320f867785.jpg
 ```
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 Python offers several ways how to create files. The solution below uses [pathlib](https://docs.python.org/3/library/pathlib.html):
@@ -218,4 +217,4 @@ https://warehouse-theme-metal.myshopify.com/cdn/shop/products/sonyxbr55front_f72
 Path("tv.jpg").write_bytes(response.content)
 ```
 
-</Details>
+</details>

sources/academy/webscraping/scraping_basics_python/05_parsing_html.md

Lines changed: 4 additions & 5 deletions
@@ -7,7 +7,6 @@ slug: /scraping-basics-python/parsing-html
 ---
 
 import Exercises from './_exercises.mdx';
-import Details from '@theme/Details';
 
 **In this lesson we'll look for products in the downloaded HTML. We'll use BeautifulSoup to turn the HTML into objects which we can work with in our Python program.**
 
@@ -121,7 +120,7 @@ Print a total count of F1 teams listed on this page:
 https://www.formula1.com/en/teams
 ```
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 ```py
@@ -137,13 +136,13 @@ https://www.formula1.com/en/teams
 print(len(soup.select(".outline")))
 ```
 
-</Details>
+</details>
 
 ### Scrape F1 drivers
 
 Use the same URL as in the previous exercise, but this time print a total count of F1 drivers.
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 ```py
@@ -159,4 +158,4 @@ Use the same URL as in the previous exercise, but this time print a total count
 print(len(soup.select(".f1-grid")))
 ```
 
-</Details>
+</details>

sources/academy/webscraping/scraping_basics_python/06_locating_elements.md

Lines changed: 6 additions & 7 deletions
@@ -7,7 +7,6 @@ slug: /scraping-basics-python/locating-elements
 ---
 
 import Exercises from './_exercises.mdx';
-import Details from '@theme/Details';
 
 **In this lesson we'll locate product data in the downloaded HTML. We'll use BeautifulSoup to find those HTML elements which contain details about each product, such as title or price.**
 
@@ -215,7 +214,7 @@ Botswana
 ...
 ```
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 ```py
@@ -240,7 +239,7 @@ Botswana
 
 Because some rows contain [table headers](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/th), we skip processing a row if `table_row.select("td")` doesn't find any [table data](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/td) cells.
 
-</Details>
+</details>
 
 ### Use CSS selectors to their max
 
@@ -249,7 +248,7 @@ Simplify the code from previous exercise. Use a single for loop and a single CSS
 - [Descendant combinator](https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_combinator)
 - [`:nth-child()` pseudo-class](https://developer.mozilla.org/en-US/docs/Web/CSS/:nth-child)
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 ```py
@@ -267,7 +266,7 @@ Simplify the code from previous exercise. Use a single for loop and a single CSS
 print(name_cell.select_one("a").text)
 ```
 
-</Details>
+</details>
 
 ### Scrape F1 news
 
@@ -286,7 +285,7 @@ Max Verstappen wins Canadian Grand Prix: F1 – as it happened
 ...
 ```
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 ```py
@@ -304,4 +303,4 @@ Max Verstappen wins Canadian Grand Prix: F1 – as it happened
 print(title.text)
 ```
 
-</Details>
+</details>

sources/academy/webscraping/scraping_basics_python/07_extracting_data.md

Lines changed: 6 additions & 7 deletions
@@ -7,7 +7,6 @@ slug: /scraping-basics-python/extracting-data
 ---
 
 import Exercises from './_exercises.mdx';
-import Details from '@theme/Details';
 
 **In this lesson we'll finish extracting product data from the downloaded HTML. With help of basic string manipulation we'll focus on cleaning and correctly representing the product price.**
 
@@ -225,7 +224,7 @@ Denon AH-C720 In-Ear Headphones 236
 ...
 ```
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 ```py
@@ -260,13 +259,13 @@ Denon AH-C720 In-Ear Headphones 236
 print(title, units)
 ```
 
-</Details>
+</details>
 
 ### Use regular expressions
 
 Simplify the code from previous exercise. Use [regular expressions](https://docs.python.org/3/library/re.html) to parse the number of units. You can match digits using a range like `[0-9]` or by a special sequence `\d`. To match more characters of the same type you can use `+`.
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 ```py
@@ -293,7 +292,7 @@ Simplify the code from previous exercise. Use [regular expressions](https://docs
 print(title, units)
 ```
 
-</Details>
+</details>
 
 ### Scrape publish dates of F1 news
 
@@ -319,7 +318,7 @@ Hints:
 - In Python you can create `datetime` objects using `datetime.fromisoformat()`, a [built-in method for parsing ISO 8601 strings](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat).
 - To get just the date part, you can call `.date()` on any `datetime` object.
 
-<Details>
+<details>
 <summary>Solution</summary>
 
 ```py
@@ -344,4 +343,4 @@ Hints:
 print(title, published_on)
 ```
 
-</Details>
+</details>

sources/platform/console/index.md

Lines changed: 2 additions & 4 deletions
@@ -10,8 +10,6 @@ slug: /console
 
 ---
 
-import Details from '@theme/Details';
-
 ## Sign-up
 
 To use Apify Console, you first need to create an account. To create it please go to the [sign-up page](https://console.apify.com/sign-up).
@@ -95,7 +93,7 @@ Use the side menu to navigate other parts of Apify Console easily.
 
 You can also navigate Apify Console via keyboard shortcuts.
 
-<Details>
+<details>
 <summary>Keyboard Shortcuts</summary>
 
 |Shortcut| Tab |
@@ -113,7 +111,7 @@ You can also navigate Apify Console via keyboard shortcuts.
 |Settings| GS |
 |Billing| GB |
 
-</Details>
+</details>
 
 | Tab name | Description |
 |:---|:---|

0 commit comments
