* replaceLinks option added
* fix problem with not using self instead of this
* oops, forgot to return.
* fix tests
* Rename new option to updateSources and adjust description
* Fix recursive test for node 10
* Fix tests for old node
* [onResourceError](#onresourceerror) - callback called when resource's downloading is failed
* [updateMissingSources](#updatemissingsources) - update url for missing sources with absolute url
* [requestConcurrency](#requestconcurrency) - set maximum concurrent requests
+ * [updateSources](#updatesources) - set to false to keep all html content unmodified

Default options you can find in [lib/config/defaults.js](https://github.com/website-scraper/node-website-scraper/blob/master/lib/config/defaults.js) or get them using `scrape.defaults`.
@@ -296,6 +297,12 @@ scrape({

Number, maximum amount of concurrent requests. Defaults to `Infinity`.

+ #### updateSources
+
+ Boolean. Defaults to `true`. Use `false` when scraped site structure does not fit your custom filename generator or if you do not want html content to be modified in any way.
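As a sketch, the new option would be passed alongside the existing ones. The `require` line and the `scrape()` call are commented out so the snippet stays self-contained; the urls and directory values are placeholders, not documented defaults:

```javascript
// const scrape = require('website-scraper');

// Options object for a scrape() call. With updateSources set to false,
// downloaded html would be saved verbatim instead of having resource urls
// rewritten to the generated local filenames.
const options = {
  urls: ['https://example.com'],   // placeholder url
  directory: './example-copy',     // placeholder output directory
  requestConcurrency: 10,          // existing option from the list above
  updateSources: false             // new option added by this change
};

// scrape(options).then(() => console.log('done'));
```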
## callback

Callback function, optional, includes following parameters:
- `error`: if error - `Error` object, if success - `null`
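The callback contract described above can be sketched with a stand-in function; `fakeScrape` below is a hypothetical stub illustrating the shape of website-scraper's callback, not the library's actual implementation:

```javascript
// Stand-in for scrape(options, callback): on failure the callback receives
// an Error object and a null result; on success it receives null and the
// result.
function fakeScrape(options, callback) {
  if (!options.urls || options.urls.length === 0) {
    callback(new Error('urls required'), null);
    return;
  }
  // Pretend every url downloaded successfully.
  callback(null, options.urls.map((url) => ({ url, saved: true })));
}

fakeScrape({ urls: ['https://example.com'] }, (error, result) => {
  if (error) throw error;
  console.log('saved resources:', result.length);
});
```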