README.md: 5 additions & 4 deletions
@@ -185,10 +185,11 @@ Why Crawlee is the preferred choice for web scraping and crawling?
 ### Why to use Crawlee rather than Scrapy?

- - Crawlee has out-of-the-box support for **headless browser** crawling (Playwright).
- - Crawlee has a **minimalistic & elegant interface** - Set up your scraper with fewer than 10 lines of code.
- - Complete **type hint** coverage.
- - Based on standard **Asyncio**.
+ - **Asyncio-based** – Leveraging the standard [Asyncio](https://docs.python.org/3/library/asyncio.html) library, Crawlee delivers better performance and seamless compatibility with other modern asynchronous libraries.
+ - **Type hints** – A newer project built with modern Python, with complete type hint coverage for a better developer experience.
+ - **Simple integration** – Crawlee crawlers are regular Python scripts, requiring no additional launcher or executor. This flexibility allows you to integrate a crawler directly into other applications.
+ - **State persistence** – Supports state persistence during interruptions, saving time and costs by avoiding the need to restart scraping pipelines from scratch after an issue.
+ - **Organized data storages** – Allows saving of multiple types of results in a single scraping run. Offers several storing options (see [datasets](https://crawlee.dev/python/api/class/Dataset) & [key-value stores](https://crawlee.dev/python/api/class/KeyValueStore)).
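As a rough illustration of the "Asyncio-based" and "Simple integration" bullets added above: a Crawlee crawler is an ordinary Python script driven by `asyncio.run()`, with scraped records pushed into the default dataset. This is a sketch based on the Crawlee for Python documentation, not code from this PR; the import path (`crawlee.crawlers`) and options such as `max_requests_per_crawl` may differ between Crawlee versions.

```python
import asyncio

# Import path follows the current Crawlee for Python docs; older releases
# exposed these classes from crawlee.beautifulsoup_crawler instead.
from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main() -> None:
    # A regular crawler object; no separate launcher or executor process.
    crawler = BeautifulSoupCrawler(max_requests_per_crawl=10)

    @crawler.router.default_handler
    async def handler(context: BeautifulSoupCrawlingContext) -> None:
        # Results pushed here land in the default dataset storage.
        await context.push_data({
            'url': context.request.url,
            'title': context.soup.title.string if context.soup.title else None,
        })
        # Follow links found on the page.
        await context.enqueue_links()

    await crawler.run(['https://crawlee.dev'])


if __name__ == '__main__':
    # Plain asyncio entry point, so the crawler can also be awaited from
    # inside another application instead of being run standalone.
    asyncio.run(main())
```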
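For the "Organized data storages" bullet, the linked dataset and key-value store classes can also be opened directly, outside a crawler. Again a sketch assuming the documented `crawlee.storages` module; the method names mirror the public docs rather than anything in this diff.

```python
import asyncio

# Assumed module path from the Crawlee for Python docs.
from crawlee.storages import Dataset, KeyValueStore


async def main() -> None:
    # Tabular, append-only results go to a dataset, one item per record.
    dataset = await Dataset.open()
    await dataset.push_data({'url': 'https://crawlee.dev', 'status': 200})

    # Single named values (run statistics, screenshots, configs) go to a
    # key-value store, addressed by key.
    kvs = await KeyValueStore.open()
    await kvs.set_value('run-summary', {'pages': 1, 'errors': 0})
    summary = await kvs.get_value('run-summary')
    print(summary)


if __name__ == '__main__':
    asyncio.run(main())
```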