Commit d693a61

SOLR-15092: remove link anchors that are no longer necessary due to relaxed validation rules

Commit generated using: perl -i -ple 's/<<(.*?)\.adoc#\1,/<<$1.adoc#,/g' src/*.adoc ...with manual cleanup of src/language-analysis.adoc due to adoc syntax ambiguity. (The replacement keeps the captured file name, as the resulting diff lines below show.)
1 parent 2544a22 commit d693a61
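For readers unfamiliar with perl, here is a minimal Python sketch of the substitution the one-liner performs. This is an illustration only, not the commit's actual tooling; the regex mirrors the commit message, and the replacement retains the captured file name, which matches the `+` lines in the diffs below:

```python
import re

# <<name.adoc#name,Label>>  becomes  <<name.adoc#,Label>>
# The backreference \1 restricts the rewrite to anchors that merely
# repeat the file name; any other explicit anchor is left untouched.
REDUNDANT_ANCHOR = re.compile(r'<<(.*?)\.adoc#\1,')

def relax_anchor(line: str) -> str:
    """Strip a link anchor that just repeats its own file name."""
    return REDUNDANT_ANCHOR.sub(r'<<\1.adoc#,', line)
```

For example, `relax_anchor('See <<searching.adoc#searching,Searching>>')` removes the redundant `#searching` anchor, while an anchor pointing at a distinct subsection, such as `#supported-field-types`, passes through unchanged.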

File tree: 174 files changed, +879 -879 lines


solr/solr-ref-guide/src/a-quick-overview.adoc

Lines changed: 4 additions & 4 deletions
@@ -27,17 +27,17 @@ In the scenario above, Solr runs alongside other server applications. For exampl
 Solr makes it easy to add the capability to search through the online store through the following steps:

-. Define a _schema_. The schema tells Solr about the contents of documents it will be indexing. In the online store example, the schema would define fields for the product name, description, price, manufacturer, and so on. Solr's schema is powerful and flexible and allows you to tailor Solr's behavior to your application. See <<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>> for all the details.
+. Define a _schema_. The schema tells Solr about the contents of documents it will be indexing. In the online store example, the schema would define fields for the product name, description, price, manufacturer, and so on. Solr's schema is powerful and flexible and allows you to tailor Solr's behavior to your application. See <<documents-fields-and-schema-design.adoc#,Documents, Fields, and Schema Design>> for all the details.
 . Feed Solr documents for which your users will search.
 . Expose search functionality in your application.

-Because Solr is based on open standards, it is highly extensible. Solr queries are simple HTTP request URLs and the response is a structured document: mainly JSON, but it could also be XML, CSV, or other formats. This means that a wide variety of clients will be able to use Solr, from other web applications to browser clients, rich client applications, and mobile devices. Any platform capable of HTTP can talk to Solr. See <<client-apis.adoc#client-apis,Client APIs>> for details on client APIs.
+Because Solr is based on open standards, it is highly extensible. Solr queries are simple HTTP request URLs and the response is a structured document: mainly JSON, but it could also be XML, CSV, or other formats. This means that a wide variety of clients will be able to use Solr, from other web applications to browser clients, rich client applications, and mobile devices. Any platform capable of HTTP can talk to Solr. See <<client-apis.adoc#,Client APIs>> for details on client APIs.

-Solr offers support for the simplest keyword searching through to complex queries on multiple fields and faceted search results. <<searching.adoc#searching,Searching>> has more information about searching and queries.
+Solr offers support for the simplest keyword searching through to complex queries on multiple fields and faceted search results. <<searching.adoc#,Searching>> has more information about searching and queries.

 If Solr's capabilities are not impressive enough, its ability to handle very high-volume applications should do the trick.

-A relatively common scenario is that you have so much data, or so many queries, that a single Solr server is unable to handle your entire workload. In this case, you can scale up the capabilities of your application using <<solrcloud.adoc#solrcloud,SolrCloud>> to better distribute the data, and the processing of requests, across many servers. Multiple options can be mixed and matched depending on the scalability you need.
+A relatively common scenario is that you have so much data, or so many queries, that a single Solr server is unable to handle your entire workload. In this case, you can scale up the capabilities of your application using <<solrcloud.adoc#,SolrCloud>> to better distribute the data, and the processing of requests, across many servers. Multiple options can be mixed and matched depending on the scalability you need.

 For example: "Sharding" is a scaling technique in which a collection is split into multiple logical pieces called "shards" in order to scale up the number of documents in a collection beyond what could physically fit on a single server. Incoming queries are distributed to every shard in the collection, which respond with merged results. Another technique available is to increase the "Replication Factor" of your collection, which allows you to add servers with additional copies of your collection to handle higher concurrent query load by spreading the requests around to multiple machines. Sharding and replication are not mutually exclusive, and together make Solr an extremely powerful and scalable platform.

solr/solr-ref-guide/src/about-filters.adoc

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.

-Like <<tokenizers.adoc#tokenizers,tokenizers>>, <<filter-descriptions.adoc#filter-descriptions,filters>> consume input and produce a stream of tokens. Filters also derive from `org.apache.lucene.analysis.TokenStream`. Unlike tokenizers, a filter's input is another TokenStream. The job of a filter is usually easier than that of a tokenizer since in most cases a filter looks at each token in the stream sequentially and decides whether to pass it along, replace it or discard it.
+Like <<tokenizers.adoc#,tokenizers>>, <<filter-descriptions.adoc#,filters>> consume input and produce a stream of tokens. Filters also derive from `org.apache.lucene.analysis.TokenStream`. Unlike tokenizers, a filter's input is another TokenStream. The job of a filter is usually easier than that of a tokenizer since in most cases a filter looks at each token in the stream sequentially and decides whether to pass it along, replace it or discard it.

 A filter may also do more complex analysis by looking ahead to consider multiple tokens at once, although this is less common. One hypothetical use for such a filter might be to normalize state names that would be tokenized as two words. For example, the single token "california" would be replaced with "CA", while the token pair "rhode" followed by "island" would become the single token "RI".

@@ -60,4 +60,4 @@ The last filter in the above example is a stemmer filter that uses the Porter st
 Conversely, applying a stemmer to your query terms will allow queries containing non stem terms, like "hugging", to match documents with different variations of the same stem word, such as "hugged". This works because both the indexer and the query will map to the same stem ("hug").

-Word stemming is, obviously, very language specific. Solr includes several language-specific stemmers created by the http://snowball.tartarus.org/[Snowball] generator that are based on the Porter stemming algorithm. The generic Snowball Porter Stemmer Filter can be used to configure any of these language stemmers. Solr also includes a convenience wrapper for the English Snowball stemmer. There are also several purpose-built stemmers for non-English languages. These stemmers are described in <<language-analysis.adoc#language-analysis,Language Analysis>>.
+Word stemming is, obviously, very language specific. Solr includes several language-specific stemmers created by the http://snowball.tartarus.org/[Snowball] generator that are based on the Porter stemming algorithm. The generic Snowball Porter Stemmer Filter can be used to configure any of these language stemmers. Solr also includes a convenience wrapper for the English Snowball stemmer. There are also several purpose-built stemmers for non-English languages. These stemmers are described in <<language-analysis.adoc#,Language Analysis>>.

solr/solr-ref-guide/src/about-this-guide.adoc

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@ The material as presented assumes that you are familiar with some basic search c
 The default port when running Solr is 8983. The samples, URLs and screenshots in this guide may show different ports, because the port number that Solr uses is configurable.

-If you have not customized your installation of Solr, please make sure that you use port 8983 when following the examples, or configure your own installation to use the port numbers shown in the examples. For information about configuring port numbers, see the section <<monitoring-solr.adoc#monitoring-solr,Monitoring Solr>>.
+If you have not customized your installation of Solr, please make sure that you use port 8983 when following the examples, or configure your own installation to use the port numbers shown in the examples. For information about configuring port numbers, see the section <<monitoring-solr.adoc#,Monitoring Solr>>.

 Similarly, URL examples use `localhost` throughout; if you are accessing Solr from a location remote to the server hosting Solr, replace `localhost` with the proper domain or IP where Solr is running.

@@ -58,7 +58,7 @@ In many cases, but not all, the parameters and outputs of API calls are the same
 Throughout this Guide, we have added examples of both styles with sections labeled "V1 API" and "V2 API". As of the 7.2 version of this Guide, these examples are not yet complete - more coverage will be added as future versions of the Guide are released.

-The section <<v2-api.adoc#v2-api,V2 API>> provides more information about how to work with the new API structure, including how to disable it if you choose to do so.
+The section <<v2-api.adoc#,V2 API>> provides more information about how to work with the new API structure, including how to disable it if you choose to do so.

 All APIs return a response header that includes the status of the request and the time to process it. Some APIs will also include the parameters used for the request. Many of the examples in this Guide omit this header information, which you can do locally by adding the parameter `omitHeader=true` to any request.

solr/solr-ref-guide/src/about-tokenizers.adoc

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.

-The job of a <<tokenizers.adoc#tokenizers,tokenizer>> is to break up a stream of text into tokens, where each token is (usually) a sub-sequence of the characters in the text. An analyzer is aware of the field it is configured for, but a tokenizer is not. Tokenizers read from a character stream (a Reader) and produce a sequence of Token objects (a TokenStream).
+The job of a <<tokenizers.adoc#,tokenizer>> is to break up a stream of text into tokens, where each token is (usually) a sub-sequence of the characters in the text. An analyzer is aware of the field it is configured for, but a tokenizer is not. Tokenizers read from a character stream (a Reader) and produce a sequence of Token objects (a TokenStream).

 Characters in the input stream may be discarded, such as whitespace or other delimiters. They may also be added to or replaced, such as mapping aliases or abbreviations to normalized forms. A token contains various metadata in addition to its text value, such as the location at which the token occurs in the field. Because a tokenizer may produce tokens that diverge from the input text, you should not assume that the text of the token is the same text that occurs in the field, or that its length is the same as the original text. It's also possible for more than one token to have the same position or refer to the same offset in the original text. Keep this in mind if you use token metadata for things like highlighting search results in the field text.

@@ -52,7 +52,7 @@ The class named in the tokenizer element is not the actual tokenizer, but rather
 A `TypeTokenFilterFactory` is available that creates a `TypeTokenFilter` that filters tokens based on their TypeAttribute, which is set in `factory.getStopTypes`.

-For a complete list of the available TokenFilters, see the section <<tokenizers.adoc#tokenizers,Tokenizers>>.
+For a complete list of the available TokenFilters, see the section <<tokenizers.adoc#,Tokenizers>>.

 == When to Use a CharFilter vs. a TokenFilter

solr/solr-ref-guide/src/aliases.adoc

Lines changed: 2 additions & 2 deletions
@@ -67,7 +67,7 @@ There are presently two types of routed alias: time routed and category routed.
 but share some common behavior.

 When processing an update for a routed alias, Solr initializes its
-<<update-request-processors.adoc#update-request-processors,UpdateRequestProcessor>> chain as usual, but
+<<update-request-processors.adoc#,UpdateRequestProcessor>> chain as usual, but
 when `DistributedUpdateProcessor` (DUP) initializes, it detects that the update targets a routed alias and injects
 `RoutedAliasUpdateProcessor` (RAUP) in front of itself.
 RAUP, in coordination with the Overseer, is the main part of a routed alias, and must immediately precede DUP. It is not

@@ -83,7 +83,7 @@ WARNING: It's extremely important with all routed aliases that the route values
 with a different route value for the same ID produces two distinct documents with the same ID accessible via the alias.
 All query time behavior of the routed alias is *_undefined_* and not easily predictable once duplicate ID's exist.

-CAUTION: It is a bad idea to use "data driven" mode (aka <<schemaless-mode.adoc#schemaless-mode,schemaless-mode>>) with
+CAUTION: It is a bad idea to use "data driven" mode (aka <<schemaless-mode.adoc#,schemaless-mode>>) with
 routed aliases, as duplicate schema mutations might happen concurrently leading to errors.

solr/solr-ref-guide/src/analysis-screen.adoc

Lines changed: 2 additions & 2 deletions
@@ -26,6 +26,6 @@ If you click the *Verbose Output* check box, you see more information, including
 image::images/analysis-screen/analysis_verbose.png[image,height=400]

-In the example screenshot above, several transformations are applied to the input "Running is a sport." The words "is" and "a" have been removed and the word "running" has been changed to its basic form, "run". This is because we are using the field type `text_en` in this scenario, which is configured to remove stop words (small words that usually do not provide a great deal of context) and "stem" terms when possible to find more possible matches (this is particularly helpful with plural forms of words). If you click the question mark next to the *Analyze Fieldname/Field Type* pull-down menu, the <<schema-browser-screen.adoc#schema-browser-screen,Schema Browser window>> will open, showing you the settings for the field specified.
+In the example screenshot above, several transformations are applied to the input "Running is a sport." The words "is" and "a" have been removed and the word "running" has been changed to its basic form, "run". This is because we are using the field type `text_en` in this scenario, which is configured to remove stop words (small words that usually do not provide a great deal of context) and "stem" terms when possible to find more possible matches (this is particularly helpful with plural forms of words). If you click the question mark next to the *Analyze Fieldname/Field Type* pull-down menu, the <<schema-browser-screen.adoc#,Schema Browser window>> will open, showing you the settings for the field specified.

-The section <<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>> describes in detail what each option is and how it may transform your data and the section <<running-your-analyzer.adoc#running-your-analyzer,Running Your Analyzer>> has specific examples for using the Analysis screen.
+The section <<understanding-analyzers-tokenizers-and-filters.adoc#,Understanding Analyzers, Tokenizers, and Filters>> describes in detail what each option is and how it may transform your data and the section <<running-your-analyzer.adoc#,Running Your Analyzer>> has specific examples for using the Analysis screen.

solr/solr-ref-guide/src/analytics-expression-sources.adoc

Lines changed: 3 additions & 3 deletions
@@ -22,10 +22,10 @@ These sources can be either Solr fields indexed with docValues, or constants.
 == Supported Field Types

-The following <<field-types-included-with-solr.adoc#field-types-included-with-solr, Solr field types>> are supported.
+The following <<field-types-included-with-solr.adoc#, Solr field types>> are supported.
 Fields of these types can be either multi-valued and single-valued.

-All fields used in analytics expressions *must* have <<docvalues.adoc#docvalues,docValues>> enabled.
+All fields used in analytics expressions *must* have <<docvalues.adoc#,docValues>> enabled.

 // Since Trie* fields are deprecated as of 7.0, we should consider removing Trie* fields from this list...

@@ -77,7 +77,7 @@ There are two possible ways of specifying constant strings, as shown below.
 === Dates

 Dates can be specified in the same way as they are in Solr queries. Just use ISO-8601 format.
-For more information, refer to the <<working-with-dates.adoc#working-with-dates,Working with Dates>> section.
+For more information, refer to the <<working-with-dates.adoc#,Working with Dates>> section.

 * `2017-07-17T19:35:08Z`

solr/solr-ref-guide/src/analytics-reduction-functions.adoc

Lines changed: 2 additions & 2 deletions
@@ -17,8 +17,8 @@
 // specific language governing permissions and limitations
 // under the License.

-Reduction functions reduce the values of <<analytics-expression-sources.adoc#analytics-expression-sources,sources>>
-and/or unreduced <<analytics-mapping-functions.adoc#analytics-mapping-functions,mapping functions>>
+Reduction functions reduce the values of <<analytics-expression-sources.adoc#,sources>>
+and/or unreduced <<analytics-mapping-functions.adoc#,mapping functions>>
 for every Solr Document to a single value.

 Below is a list of all reduction functions provided by the Analytics Component.

solr/solr-ref-guide/src/analytics.adoc

Lines changed: 2 additions & 2 deletions
@@ -161,7 +161,7 @@ The supported fields are listed in the <<analytics-expression-sources.adoc#suppo
 Mapping Functions::
 Mapping functions map values for each Solr Document or Reduction.
-The provided mapping functions are detailed in the <<analytics-mapping-functions.adoc#analytics-mapping-functions,Analytics Mapping Function Reference>>.
+The provided mapping functions are detailed in the <<analytics-mapping-functions.adoc#,Analytics Mapping Function Reference>>.

 * Unreduced Mapping: Mapping a Field with another Field or Constant returns a value for every Solr Document.
 Unreduced mapping functions can take fields, constants as well as other unreduced mapping functions as input.

@@ -170,7 +170,7 @@ Unreduced mapping functions can take fields, constants as well as other unreduce
 Reduction Functions::
 Functions that reduce the values of sources and/or unreduced mapping functions for every Solr Document to a single value.
-The provided reduction functions are detailed in the <<analytics-reduction-functions.adoc#analytics-reduction-functions,Analytics Reduction Function Reference>>.
+The provided reduction functions are detailed in the <<analytics-reduction-functions.adoc#,Analytics Reduction Function Reference>>.

 ==== Component Ordering
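As a sanity check on a bulk rewrite like the one above, one could scan the rewritten files for any surviving self-referential anchors. Below is a small hypothetical Python helper; the pattern and function name are illustrative, not part of the commit:

```python
import re

# A self-referential anchor repeats its file name after the '#',
# e.g. <<searching.adoc#searching,Searching>>.
SELF_REF = re.compile(r'<<([^.#,>]+)\.adoc#\1,')

def find_self_referential_anchors(text: str) -> list[str]:
    """Return the file names of any anchors that still repeat themselves."""
    return [m.group(1) for m in SELF_REF.finditer(text)]
```

After a complete rewrite, this should return an empty list for the contents of every file under src/.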
