
Commit 6c03858

Merge pull request #964 from j-f1/merge-wiki
Copy the wiki into the docs directory
2 parents 2eb8654 + 73eab0b commit 6c03858

File tree

5 files changed: +459 -1 lines changed


README.md

Lines changed: 27 additions & 1 deletion
@@ -10,7 +10,26 @@ Keep track of development news:
* Watch the repository on [GitHub](https://github.com/freeCodeCamp/devdocs/subscription)
* Follow [@DevDocs](https://twitter.com/DevDocs) on Twitter

- **Table of Contents:** [Quick Start](#quick-start) · [Vision](#vision) · [App](#app) · [Scraper](#scraper) · [Commands](#available-commands) · [Contributing](#contributing) · [License](#copyright--license) · [Questions?](#questions)
+ **Table of Contents:** [Plugins and Extensions](#plugins-and-extensions) · [Quick Start](#quick-start) · [Vision](#vision) · [App](#app) · [Scraper](#scraper) · [Commands](#available-commands) · [Contributing](#contributing) · [Documentation](#documentation) · [License](#copyright--license) · [Questions?](#questions)

## Plugins and Extensions

* [Chrome web app](https://chrome.google.com/webstore/detail/devdocs/mnfehgbmkapmjnhcnbodoamcioleeooe)
* [Ubuntu Touch app](https://uappexplorer.com/app/devdocsunofficial.berkes)
* [Sublime Text plugin](https://sublime.wbond.net/packages/DevDocs)
* [Atom plugin](https://atom.io/packages/devdocs)
* [Brackets extension](https://github.com/gruehle/dev-docs-viewer)
* [Fluid](http://fluidapp.com) for turning DevDocs into a real OS X app
* [GTK shell / Vim integration](https://github.com/naquad/devdocs-shell)
* [Emacs lookup](https://github.com/skeeto/devdocs-lookup)
* [Alfred Workflow](https://github.com/yannickglt/alfred-devdocs)
* [Vim search plugin with DevDocs in its defaults](https://github.com/waiting-for-dev/vim-www). Set `let g:www_shortcut_engines = { 'devdocs': ['Devdocs', '<leader>dd'] }` to get a `:Devdocs` command and a `<leader>dd` mapping.
* [Visual Studio Code plugin](https://marketplace.visualstudio.com/items?itemName=akfish.vscode-devdocs) (1)
* [Visual Studio Code plugin](https://marketplace.visualstudio.com/items?itemName=deibit.devdocs) (2)
* [Desktop application](https://github.com/egoist/devdocs-desktop)
* [Doc Browser](https://github.com/qwfy/doc-browser) is a native Linux app that supports DevDocs docsets
* [GNOME Application](https://github.com/hardpixel/devdocs-desktop), a GTK3 application with search integrated in the headerbar

## Quick Start

@@ -132,6 +151,13 @@ Contributions are welcome. Please read the [contributing guidelines](https://git

DevDocs's own documentation is available on the [wiki](https://github.com/freeCodeCamp/devdocs/wiki).

## Documentation

* [Adding documentations to DevDocs](./docs/adding-docs.md)
* [Scraper Reference](./docs/scraper-reference.md)
* [Filter Reference](./docs/filter-reference.md)
* [Maintainers’ Guide](./docs/maintainers.md)

## Copyright / License

Copyright 2013-2019 Thibaut Courouble and [other contributors](https://github.com/freeCodeCamp/devdocs/graphs/contributors)

docs/Filter-Reference.md

Lines changed: 224 additions & 0 deletions
@@ -0,0 +1,224 @@

**Table of contents:**

* [Overview](#overview)
* [Instance methods](#instance-methods)
* [Core filters](#core-filters)
* [Custom filters](#custom-filters)
  - [CleanHtmlFilter](#cleanhtmlfilter)
  - [EntriesFilter](#entriesfilter)

## Overview

Filters use the [HTML::Pipeline](https://github.com/jch/html-pipeline) library. They take an HTML string or [Nokogiri](http://nokogiri.org/) node as input, optionally modify it and/or extract information from it, and then output the result. Together they form a pipeline in which each filter hands its output to the next filter's input. Every documentation page passes through this pipeline before being copied to the local filesystem.

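To make the chaining concrete, here is a rough, self-contained sketch using the html-pipeline gem directly (this is the underlying library, not DevDocs' own wrapper; the filter class and input string are made up, and the gem's 2.x API is assumed):

```ruby
require 'html/pipeline'

# A toy filter: it receives the previous filter's output and returns its own.
class StripCommentsFilter < HTML::Pipeline::Filter
  def call
    doc.xpath('.//comment()').remove # drop HTML comments
    doc
  end
end

pipeline = HTML::Pipeline.new([StripCommentsFilter])
result   = pipeline.call('<p>Hello<!-- internal note --></p>')
result[:output].to_s # the HTML with the comment removed
```
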
Filters are subclasses of the [`Docs::Filter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/filter.rb) class and require a `call` method. A basic implementation looks like this:

```ruby
module Docs
  class CustomFilter < Filter
    def call
      doc
    end
  end
end
```

Filters that manipulate the Nokogiri node object (`doc` and related methods) are _HTML filters_ and must not manipulate the HTML string (`html`). Conversely, filters that manipulate the string representation of the document are _text filters_ and must not manipulate the Nokogiri node object. The two types are divided into two separate stacks within the scrapers, and those stacks are combined into a single pipeline that runs the HTML filters before the text filters (more details [here](./scraper-reference.md#filter-stacks)). This avoids parsing the document multiple times.

The `call` method must return either `doc` or `html`, depending on the type of filter.

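For contrast, here is a minimal sketch of a _text filter_ (the scraper and filter names are hypothetical); it edits the string in place and returns `html`, on the assumption that `html` hands back the memoized string for the current page:

```ruby
module Docs
  class MyScraper
    class TidyWhitespaceFilter < Filter
      def call
        # Text filter: only touch the string representation, never `doc`.
        html.gsub!(/\n{3,}/, "\n\n") # collapse runs of blank lines (assumes `html` is memoized)
        html
      end
    end
  end
end
```
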
## Instance methods

* `doc` [Nokogiri::XML::Node]
  The Nokogiri representation of the container element.
  See [Nokogiri's API docs](http://www.rubydoc.info/github/sparklemotion/nokogiri/Nokogiri/XML/Node) for the list of available methods.

* `html` [String]
  The string representation of the container element.

* `context` [Hash] **(frozen)**
  The scraper's `options`, along with a few additional keys: `:base_url`, `:root_url`, `:root_page` and `:url`.

* `result` [Hash]
  Used to store the page's metadata and to pass information back to the scraper.
  Possible keys:

  - `:path` — the page's normalized path
  - `:store_path` — the path where the page will be stored (equal to `:path` with `.html` appended)
  - `:internal_urls` — the list of distinct internal URLs found within the page
  - `:entries` — the [`Entry`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/models/entry.rb) objects to add to the index

* `css`, `at_css`, `xpath`, `at_xpath`
  Shortcuts for `doc.css`, `doc.xpath`, etc.

* `base_url`, `current_url`, `root_url` [Docs::URL]
  Shortcuts for `context[:base_url]`, `context[:url]`, and `context[:root_url]` respectively.

* `root_path` [String]
  Shortcut for `context[:root_path]`.

* `subpath` [String]
  The current URL's sub-path relative to the base URL.
  _Example: if `base_url` equals `example.com/docs` and `current_url` equals `example.com/docs/file?raw`, the returned value is `/file`._

* `slug` [String]
  The `subpath` with the leading slash and `.html` extension (if any) removed.
  _Example: if `subpath` equals `/dir/file.html`, the returned value is `dir/file`._

* `root_page?` [Boolean]
  Returns `true` if the current page is the root page.

* `initial_page?` [Boolean]
  Returns `true` if the current page is the root page or if its subpath is one of the scraper's `initial_paths`.

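As a quick illustration of the URL helpers above (the scraper name and URLs are hypothetical; the values follow the definitions just given):

```ruby
module Docs
  class MyScraper
    class InspectUrlsFilter < Filter
      def call
        # Assuming base_url    == "example.com/docs"
        # and      current_url == "example.com/docs/guide/intro.html":
        subpath    # => "/guide/intro.html" (sub-path relative to the base URL)
        slug       # => "guide/intro"       (no leading slash, no .html extension)
        root_page? # => false
        doc
      end
    end
  end
end
```
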
## Core filters

* [`ContainerFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/container.rb) — changes the root node of the document (removes everything outside)
* [`CleanHtmlFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/clean_html.rb) — removes HTML comments, `<script>`, `<style>`, etc.
* [`NormalizeUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/normalize_urls.rb) — replaces all URLs with their fully qualified counterpart
* [`InternalUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/internal_urls.rb) — detects internal URLs (the ones to scrape) and replaces them with their unqualified, relative counterpart
* [`NormalizePathsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/normalize_paths.rb) — makes the internal paths consistent (e.g. always end with `.html`)
* [`CleanLocalUrlsFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/clean_local_urls.rb) — removes links, iframes and images pointing to localhost (`FileScraper` only)
* [`InnerHtmlFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/inner_html.rb) — converts the document to a string
* [`CleanTextFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/clean_text.rb) — removes empty nodes
* [`AttributionFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/attribution.rb) — appends the license info and link to the original document
* [`TitleFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/title.rb) — prepends the document with a title (disabled by default)
* [`EntriesFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/entries.rb) — abstract filter for extracting the page's metadata

## Custom filters

Scrapers can have any number of custom filters but require at least the two described below.

**Note:** filters are located in the [`lib/docs/filters`](https://github.com/Thibaut/devdocs/tree/master/lib/docs/filters/) directory. The class's name must be the [CamelCase](http://api.rubyonrails.org/classes/String.html#method-i-camelize) equivalent of the filename.

### `CleanHtmlFilter`

The `CleanHtml` filter is tasked with cleaning the HTML markup where necessary and removing anything superfluous or nonessential. Only the core documentation should remain at the end.

Nokogiri's many jQuery-like methods make it easy to search and modify elements — see the [API docs](http://www.rubydoc.info/github/sparklemotion/nokogiri/Nokogiri/XML/Node).

Here's an example implementation that covers the most common use-cases:

```ruby
module Docs
  class MyScraper
    class CleanHtmlFilter < Filter
      def call
        css('hr').remove
        css('#changelog').remove if root_page?

        # Set id attributes on <h3> instead of an empty <a>
        css('h3').each do |node|
          node['id'] = node.at_css('a')['id']
        end

        # Make proper table headers
        css('td.header').each do |node|
          node.name = 'th'
        end

        # Remove code highlighting (re-assigning the text content drops the child tags)
        css('pre').each do |node|
          node.content = node.content
        end

        doc
      end
    end
  end
end
```

**Notes:**

* Empty elements will be automatically removed by the core `CleanTextFilter` later in the pipeline's execution.
* Although the goal is to end up with a clean version of the page, try to keep the number of modifications to a minimum, so as to make the code easier to maintain. Custom CSS is the preferred way of normalizing the pages (except for hiding content, which should always be done by removing the markup).
* Try to document your filter's behavior as much as possible, particularly modifications that apply only to a subset of pages. It'll make updating the documentation easier.

### `EntriesFilter`

The `Entries` filter is responsible for extracting the page's metadata, represented by a set of _entries_, each with a name, a type and a path.

The following two models are used under the hood to represent the metadata:

* [`Entry(name, type, path)`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/models/entry.rb)
* [`Type(name, slug, count)`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/core/models/type.rb)

Each scraper must implement its own `EntriesFilter` by subclassing the [`Docs::EntriesFilter`](https://github.com/Thibaut/devdocs/blob/master/lib/docs/filters/core/entries.rb) class. The base class already implements the `call` method and provides four methods that subclasses can override:

* `get_name` [String]
  The name of the default entry (i.e. the page's name).
  It is usually guessed from the `slug` (documented above) or by searching the HTML markup.
  **Default:** modified version of `slug` (underscores are replaced with spaces and forward slashes with dots)

* `get_type` [String]
  The type of the default entry (i.e. the page's type).
  Entries without a type can be searched for but won't be listed in the app's sidebar (unless no other entries have a type).
  **Default:** `nil`

* `include_default_entry?` [Boolean]
  Whether to include the default entry.
  Used when a page consists of multiple entries (returned by `additional_entries`) but doesn't have a name/type of its own, or to remove a page from the index altogether (when it has no additional entries either). In the latter case the page won't be copied to the local filesystem and any links to it from other pages will be broken. As explained on the [Scraper Reference](./scraper-reference.md) page, this is used to keep the `:skip` / `:skip_patterns` options at a maintainable size, or when the page contains links that can't be reached from anywhere else.
  **Default:** `true`

* `additional_entries` [Array]
  The list of additional entries.
  Each entry is represented by an Array of three attributes: its name, fragment identifier, and type. The fragment identifier refers to the `id` attribute of the HTML element (usually a heading) that the entry relates to. It is combined with the page's path to become the entry's path. If absent or `nil`, the page's path is used. If the type is absent or `nil`, the default `type` is used.
  Example: `[ ['One'], ['Two', 'id'], ['Three', nil, 'type'] ]` adds three additional entries: the first named "One" with the default path and type, the second named "Two" with the URL fragment "#id" and the default type, and the third named "Three" with the default path and the type "type".
  The list is usually constructed by running through the markup. Exceptions can also be hard-coded for specific pages.
  **Default:** `[]`

The following accessors are also available, but must not be overridden:

* `name` [String]
  Memoized version of `get_name` (`nil` for the root page).

* `type` [String]
  Memoized version of `get_type` (`nil` for the root page).

**Notes:**

* Leading and trailing whitespace is automatically removed from names and types.
* Names must be unique across the documentation and as short as possible (ideally less than 30 characters). Whenever possible, methods should be differentiated from properties by appending `()`, and instance methods should be differentiated from class methods using the `Class#method` or `object.method` conventions.
* You can call `name` from `get_type` or `type` from `get_name`, but doing both will cause a stack overflow (i.e. you can infer the name from the type or the type from the name, but not both at the same time). Don't call `get_name` or `get_type` directly, as their values aren't memoized.
* The root page has no name and no type (both are `nil`). `get_name` and `get_type` won't be called for that page (but `additional_entries` will).
* `Docs::EntriesFilter` is an _HTML filter_. It must be added to the scraper's `html_filters` stack (a registration sketch follows the example below).
* Try to document the code as much as possible, particularly the special cases. It'll make updating the documentation easier.

**Example:**

```ruby
module Docs
  class MyScraper
    class EntriesFilter < Docs::EntriesFilter
      def get_name
        node = at_css('h1')
        result = node.content.strip
        result << ' event' if type == 'Events'
        result << '()' if node['class'].try(:include?, 'function')
        result
      end

      def get_type
        object, method = *slug.split('/')
        method ? object : 'Miscellaneous'
      end

      def additional_entries
        return [] if root_page?

        css('h2').map do |node|
          [node.content, node['id']]
        end
      end

      def include_default_entry?
        !at_css('.obsolete')
      end
    end
  end
end
```
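
Finally, custom filters only run once the scraper references them. Below is a minimal sketch of the usual registration, assuming a hypothetical `MyScraper` class; the authoritative list of scraper options lives in the [Scraper Reference](./scraper-reference.md):

```ruby
module Docs
  class MyScraper < UrlScraper
    self.base_url = 'https://example.com/docs/' # hypothetical documentation site

    # Both custom filters above are HTML filters, so they go on the html_filters stack.
    html_filters.push 'my_scraper/clean_html', 'my_scraper/entries'
  end
end
```
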
return [[Home]]
