The product card has four classes: `product-item`, `product-item--vertical`, `1/3--tablet-and-up`, and `1/4--desk`. Only the first one checks all the boxes. A product card *is* a product item, after all. The others seem more about styling—defining how the element looks on the screen—and are probably tied to CSS rules.
This class is also unique enough in the page's context. If it were something generic like `item`, there would be a higher risk that developers of the website might use it for unrelated elements. In the **Elements** tab, you can see a parent element `product-list` that contains all the product cards marked as `product-item`. This structure aligns with the data we're after.

Hint: Learn about the [descendant combinator](https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_combinator).

**In this lesson, we'll use the browser's developer tools to manually extract product data from an e-commerce website.**
---
In our quest to scrape products from the [Sales page](https://warehouse-theme-metal.myshopify.com/collections/sales), we've managed to locate the parent elements containing the relevant data. Now, how do we extract that data?
## Finding product details
Previously, we figured out how to save the subwoofer product card to a variable named `subwoofer` in the **Console**.

The product details are inside the element as text, so maybe if we extract that text, we can work out the individual values?
```js
subwoofer.textContent;
```
That indeed outputs all the text, but in a form which would be hard to break down into relevant pieces.
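To see why, here's a rough, made-up sketch of the kind of string `textContent` returns for the whole card. The title and prices below are invented for illustration; the real text depends on the page's markup:

```js
// Hypothetical text blob, similar in shape to what subwoofer.textContent yields:
// title, labels, and prices all run together, with only whitespace between them.
const blob = `
  Sony SACS9 Active Subwoofer
  Regular price
  $158.00
  Sale price
  $137.95
`;

// Naive splitting leaves fragments we'd still have to interpret by position:
const pieces = blob.split('\n').map((line) => line.trim()).filter(Boolean);
console.log(pieces);
```

Even after splitting and trimming, we'd have to guess which fragment is the title and which is the sale price.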

We'll first need to locate the relevant child elements and extract the data from each of them individually.
## Extracting title
We'll use the **Elements** tab of DevTools to inspect all child elements of the product card for the Sony subwoofer. We can see that the title of the product is inside an `a` element with several classes. Of those, `product-item__title` seems like a great choice for locating the element.

JavaScript represents HTML elements as [Element](https://developer.mozilla.org/en-US/docs/Web/API/Element) objects. Besides properties we've already played with, such as `textContent` or `outerHTML`, these objects also have a [`querySelector()`](https://developer.mozilla.org/en-US/docs/Web/API/Element/querySelector) method. Here, the method looks for matches only within the element's children:
```js
title = subwoofer.querySelector('.product-item__title');
title.textContent;
```
Notice we're calling `querySelector()` on the `subwoofer` variable, not `document`. And just like this, we've scraped our first piece of data! We've extracted the product title.

## Extracting price

To figure out how to get the price, we'll use the **Elements** tab of DevTools again. We notice there are two prices, a regular price and a sale price. For the purpose of watching prices, we'll need the sale price. Both are `span` elements with the `price` class.

We could either rely on the sale price always being the one that's highlighted, or on it always being the first price. For now we'll go with the latter and let `querySelector()` simply return the first result:
```js
price = subwoofer.querySelector('.price');
price.textContent;
```
It works, but the price isn't alone in the result. Before we could use such data, we'd need to do some **data cleaning**.

But for now that's okay. We're just testing the waters so that we have an idea of what our scraper will need to do. Once we get to extracting prices in Python, we'll figure out how to get the values as numbers.
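As a sketch of what such cleaning might look like later (the raw string below is a made-up example of the kind of text the `.price` element yields):

```js
// Hypothetical raw text scraped from the price element:
const rawPrice = 'Sale price$137.95';

// Strip everything except digits and the decimal point, then parse as a number:
const price = parseFloat(rawPrice.replace(/[^0-9.]/g, ''));
console.log(price); // 137.95
```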
In the next lesson, we'll start with our Python project. First, we'll figure out how to download the Sales page without a browser and make it accessible to a Python program.
---
<Exercises />
### Extract the price of IKEA's most expensive artificial plant
At IKEA's [Artificial plants & flowers listing](https://www.ikea.com/se/en/cat/artificial-plants-flowers-20492/), use CSS selectors and HTML element manipulation in the **Console** to extract the price of the most expensive artificial plant (sold in Sweden, as you'll be browsing their Swedish offer). Before opening DevTools, use your judgment to adjust the page to make the task as straightforward as possible. Finally, use JavaScript's [`parseInt()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt) function to convert the price text into a number.
<details>
<summary>Solution</summary>
1. Open the [Artificial plants & flowers listing](https://www.ikea.com/se/en/cat/artificial-plants-flowers-20492/).
1. Sort the products by price, from high to low, so the most expensive plant appears first in the listing.
1. Activate the element selection tool in your DevTools.
1. Click on the price of the first and most expensive plant.
1. Notice that the price is structured into two elements, with the integer separated from the currency, under a class named `plp-price__integer`. This structure is convenient for extracting the value.
1. In the **Console**, execute `document.querySelector('.plp-price__integer')`. This returns the element representing the first price in the listing. Since `document.querySelector()` returns the first matching element, it directly selects the most expensive plant's price.
1. Save the element in a variable by executing `price = document.querySelector('.plp-price__integer')`.
1. Convert the price text into a number by executing `parseInt(price.textContent)`.
1. At the time of writing, this returns `699`, meaning [699 SEK](https://www.google.com/search?q=699%20sek).
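A quick sanity check of how `parseInt()` behaves with strings like these (the inputs are illustrative, not taken from IKEA's markup):

```js
// parseInt() skips leading whitespace, reads leading digits, and ignores the rest:
const clean = parseInt('699', 10);        // 699
const withUnit = parseInt(' 699 kr', 10); // 699 — trailing text is ignored
const tricky = parseInt('1 699', 10);     // 1 — beware of thousands separators!
console.log(clean, withUnit, tricky);
```

The last case shows why it pays to inspect the exact text before trusting the converted number.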
</details>
### Extract the name of the top wiki on Fandom Movies
On Fandom's [Movies page](https://www.fandom.com/topics/movies), use CSS selectors and HTML element manipulation in the **Console** to extract the name of the top wiki. Use JavaScript's [`trim()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/trim) method to remove white space around the name.

<details>
<summary>Solution</summary>
1. Open the [Movies page](https://www.fandom.com/topics/movies).
1. Activate the element selection tool in your DevTools.
1. Click on the list item for the top Fandom wiki in the category.
1. Notice that it has a class `topic_explore-wikis__link`.
1. In the **Console**, execute `document.querySelector('.topic_explore-wikis__link')`. This returns the element representing the top list item. The site uses this class only for the **Top Wikis** list, and because `document.querySelector()` returns the first matching element, you're almost done.
1. Save the element in a variable by executing `item = document.querySelector('.topic_explore-wikis__link')`.
1. Get the element's text without extra white space by executing `item.textContent.trim()`. At the time of writing, this returns `"Pixar Wiki"`.
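For reference, `trim()` removes whitespace only at both ends of a string, which is exactly the kind of padding `textContent` picks up from the markup's indentation (the string below is illustrative):

```js
// textContent often includes newlines and indentation around the actual text:
const raw = '\n      Pixar Wiki\n    ';
const name = raw.trim();
console.log(name); // 'Pixar Wiki'
```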
</details>
### Extract details about the first post on Guardian's F1 news
On the Guardian's [F1 news page](https://www.theguardian.com/sport/formulaone), use CSS selectors and HTML manipulation in the **Console** to extract details about the first post. Specifically, extract its title, lead paragraph, and URL of the associated photo.
<details>
<summary>Solution</summary>

1. Open the [F1 news page](https://www.theguardian.com/sport/formulaone).
1. Activate the element selection tool in your DevTools.
1. Click on the first post.
1. Notice that the markup does not provide clear, reusable class names for this task. The structure uses generic tags and randomized classes, requiring you to rely on the element hierarchy and order instead.
1. In the **Console**, execute `post = document.querySelector('#maincontent ul li')`. This returns the element representing the first post.
1. Extract the post's title by executing `post.querySelector('h3').textContent`.
1. Extract the lead paragraph by executing `post.querySelector('span div').textContent`.
1. Extract the photo URL by executing `post.querySelector('img').src`.

</details>