Commit 0eb3829

Fix URLs after moving repo to new organization (#229)

1 parent 40c2cec · commit 0eb3829
2 files changed: 16 additions, 16 deletions

README.md: 13 additions, 13 deletions
@@ -1,21 +1,21 @@
 ## Introduction
 Download website to a local directory (including all css, images, js, etc.)
 
-[![Build Status](https://img.shields.io/travis/s0ph1e/node-website-scraper/master.svg?style=flat)](https://travis-ci.org/s0ph1e/node-website-scraper)
+[![Build Status](https://img.shields.io/travis/website-scraper/node-website-scraper/master.svg?style=flat)](https://travis-ci.org/website-scraper/node-website-scraper)
 [![Build status](https://ci.appveyor.com/api/projects/status/s7jxui1ngxlbgiav/branch/master?svg=true)](https://ci.appveyor.com/project/s0ph1e/node-website-scraper/branch/master)
-[![Test Coverage](https://codeclimate.com/github/s0ph1e/node-website-scraper/badges/coverage.svg)](https://codeclimate.com/github/s0ph1e/node-website-scraper/coverage)
-[![Code Climate](https://codeclimate.com/github/s0ph1e/node-website-scraper/badges/gpa.svg)](https://codeclimate.com/github/s0ph1e/node-website-scraper)
-[![Dependency Status](https://david-dm.org/s0ph1e/node-website-scraper.svg?style=flat)](https://david-dm.org/s0ph1e/node-website-scraper)
+[![Test Coverage](https://codeclimate.com/github/website-scraper/node-website-scraper/badges/coverage.svg)](https://codeclimate.com/github/website-scraper/node-website-scraper/coverage)
+[![Code Climate](https://codeclimate.com/github/website-scraper/node-website-scraper/badges/gpa.svg)](https://codeclimate.com/github/website-scraper/node-website-scraper)
+[![Dependency Status](https://david-dm.org/website-scraper/node-website-scraper.svg?style=flat)](https://david-dm.org/website-scraper/node-website-scraper)
 
 [![Version](https://img.shields.io/npm/v/website-scraper.svg?style=flat)](https://www.npmjs.org/package/website-scraper)
 [![Downloads](https://img.shields.io/npm/dm/website-scraper.svg?style=flat)](https://www.npmjs.org/package/website-scraper)
-[![Gitter](https://badges.gitter.im/s0ph1e/node-website-scraper.svg)](https://gitter.im/s0ph1e/node-website-scraper?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
+[![Gitter](https://badges.gitter.im/website-scraper/node-website-scraper.svg)](https://gitter.im/website-scraper/node-website-scraper?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
 
 [![NPM Stats](https://nodei.co/npm/website-scraper.png?downloadRank=true&stars=true)](https://www.npmjs.org/package/website-scraper)
 
-You can try it in [demo app](https://scraper.nepochataya.pp.ua/) ([source](https://github.com/s0ph1e/web-scraper))
+You can try it in [demo app](https://scraper.nepochataya.pp.ua/) ([source](https://github.com/website-scraper/web-scraper))
 
-**Note:** by default dynamic websites (where content is loaded by js) may be saved not correctly because `website-scraper` doesn't execute js, it only parses http responses for html and css files. If you need to download dynamic website take a look on [website-scraper-phantom](https://github.com/s0ph1e/node-website-scraper-phantom).
+**Note:** by default dynamic websites (where content is loaded by js) may be saved not correctly because `website-scraper` doesn't execute js, it only parses http responses for html and css files. If you need to download dynamic website take a look on [website-scraper-phantom](https://github.com/website-scraper/node-website-scraper-phantom).
 
 
 ## Installation
@@ -64,7 +64,7 @@ scrape(options, (error, result) => {
 * [onResourceError](#onresourceerror) - callback called when resource's downloading is failed
 * [updateMissingSources](#updatemissingsources) - update url for missing sources with absolute url
 
-Default options you can find in [lib/config/defaults.js](https://github.com/s0ph1e/node-website-scraper/blob/master/lib/config/defaults.js) or get them using `scrape.defaults`.
+Default options you can find in [lib/config/defaults.js](https://github.com/website-scraper/node-website-scraper/blob/master/lib/config/defaults.js) or get them using `scrape.defaults`.
 
 #### urls
 Array of objects which contain urls to download and filenames for them. **_Required_**.
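For context on the hunk above: the README documents `urls` entries as objects containing a url and a filename, while its own samples also pass plain strings. A minimal sketch of such an options object (the concrete values and paths below are illustrative, not part of this commit):

```javascript
// Illustrative options object for website-scraper; the `urls` shape
// (string or { url, filename }) and `directory` come from the README text.
const options = {
  urls: [
    'http://example.com/',
    { url: 'http://example.com/about', filename: 'about.html' }
  ],
  directory: '/tmp/example-site'
};

// With the package installed, the documented call would be roughly:
//   const scrape = require('website-scraper');
//   scrape(options).then(console.log).catch(console.log);

console.log(options.urls.length); // two entries to download
```

Unspecified options fall back to the defaults linked in the hunk (`scrape.defaults`).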
@@ -215,10 +215,10 @@ scrape({
 }
 }).then(console.log).catch(console.log);
 ```
-Scrape function resolves with array of [Resource](https://github.com/s0ph1e/node-website-scraper/blob/master/lib/resource.js) objects which contain `metadata` property from `httpResponseHandler`.
+Scrape function resolves with array of [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) objects which contain `metadata` property from `httpResponseHandler`.
 
 #### resourceSaver
-Class which saves [Resources](https://github.com/s0ph1e/node-website-scraper/blob/master/lib/resource.js), should have methods `saveResource` and `errorCleanup` which return Promises. Use it to save files where you need: to dropbox, amazon S3, existing directory, etc. By default all files are saved in local file system to new directory passed in `directory` option (see [lib/resource-saver/index.js](https://github.com/s0ph1e/node-website-scraper/blob/master/lib/resource-saver/index.js)).
+Class which saves [Resources](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js), should have methods `saveResource` and `errorCleanup` which return Promises. Use it to save files where you need: to dropbox, amazon S3, existing directory, etc. By default all files are saved in local file system to new directory passed in `directory` option (see [lib/resource-saver/index.js](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource-saver/index.js)).
 ```javascript
 scrape({
 urls: ['http://example.com/'],
@@ -231,7 +231,7 @@ scrape({
 ```
 
 #### onResourceSaved
-Function called each time when resource is saved to file system. Callback is called with [Resource](https://github.com/s0ph1e/node-website-scraper/blob/master/lib/resource.js) object. Defaults to `null` - no callback will be called.
+Function called each time when resource is saved to file system. Callback is called with [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) object. Defaults to `null` - no callback will be called.
 ```javascript
 scrape({
 urls: ['http://example.com/'],
@@ -243,7 +243,7 @@ scrape({
 ```
 
 #### onResourceError
-Function called each time when resource's downloading/handling/saving to fs was failed. Callback is called with - [Resource](https://github.com/s0ph1e/node-website-scraper/blob/master/lib/resource.js) object and `Error` object. Defaults to `null` - no callback will be called.
+Function called each time when resource's downloading/handling/saving to fs was failed. Callback is called with - [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) object and `Error` object. Defaults to `null` - no callback will be called.
 ```javascript
 scrape({
 urls: ['http://example.com/'],
@@ -281,7 +281,7 @@ scrape({
 ## callback
 Callback function, optional, includes following parameters:
 - `error`: if error - `Error` object, if success - `null`
-- `result`: if error - `null`, if success - array of [Resource](https://github.com/s0ph1e/node-website-scraper/blob/master/lib/resource.js) objects containing:
+- `result`: if error - `null`, if success - array of [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) objects containing:
   - `url`: url of loaded page
   - `filename`: filename where page was saved (relative to `directory`)
   - `children`: array of children Resources

package.json: 3 additions, 3 deletions
@@ -11,7 +11,7 @@
   },
   "repository": {
     "type": "git",
-    "url": "git://github.com/s0ph1e/node-website-scraper.git"
+    "url": "git://github.com/website-scraper/node-website-scraper.git"
   },
   "keywords": [
     "scrape",
@@ -29,9 +29,9 @@
   "author": "Sophia Antipenko <[email protected]>",
   "license": "MIT",
   "bugs": {
-    "url": "https://github.com/s0ph1e/node-website-scraper/issues"
+    "url": "https://github.com/website-scraper/node-website-scraper/issues"
   },
-  "homepage": "https://github.com/s0ph1e/node-website-scraper",
+  "homepage": "https://github.com/website-scraper/node-website-scraper",
   "dependencies": {
     "bluebird": "^3.0.1",
     "cheerio": "0.22.0",
