2 changes: 2 additions & 0 deletions docs/build.gradle
@@ -37,6 +37,7 @@ ext.docsFileTree = fileTree(projectDir) {
}

tasks.named("yamlRestTest") {
enabled = false
if (buildParams.isSnapshotBuild() == false) {
// LOOKUP is not available in snapshots
systemProperty 'tests.rest.blacklist', [
@@ -47,6 +48,7 @@ tasks.named("yamlRestTest") {

/* List of files that have snippets that will not work until platinum tests can occur ... */
tasks.named("buildRestTests").configure {
enabled = false
getExpectedUnconvertedCandidates().addAll(
'reference/ml/anomaly-detection/ml-configuring-transform.asciidoc',
'reference/ml/anomaly-detection/apis/delete-calendar-event.asciidoc',
506 changes: 506 additions & 0 deletions docs/docset.yml


68 changes: 68 additions & 0 deletions docs/extend/creating-classic-plugins.md
@@ -0,0 +1,68 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/plugins/current/creating-classic-plugins.html
---

# Creating classic plugins [creating-classic-plugins]

Classic plugins provide {{es}} with mechanisms for custom authentication, authorization, scoring, and more.

::::{admonition} Plugin release lifecycle
:class: important

Classic plugins require you to build a new version for each new {{es}} release. This version is checked when the plugin is installed and when it is loaded. {{es}} will refuse to start in the presence of plugins with the incorrect `elasticsearch.version`.

::::
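
For example, a classic plugin's `plugin-descriptor.properties` pins the build to a single release through the `elasticsearch.version` property. The values below are illustrative only:

```
description=My custom plugin
version=1.0.0
name=my-plugin
classname=org.example.MyPlugin
java.version=17
elasticsearch.version=8.7.0
```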



## Classic plugin file structure [_classic_plugin_file_structure]

Classic plugins are ZIP files composed of JAR files and [a metadata file called `plugin-descriptor.properties`](/extend/plugin-descriptor-file-classic.md), a Java properties file that describes the plugin.

Note that only JAR files at the root of the plugin are added to the classpath for the plugin. If you need other resources, package them into a resources JAR.
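
For illustration, the contents of a classic plugin ZIP might look like this (all file names here are hypothetical):

```
my-plugin.zip
├── my-plugin.jar                 # plugin code; root-level JARs go on the plugin's classpath
├── my-plugin-resources.jar       # non-code resources, packaged into a JAR
└── plugin-descriptor.properties  # plugin metadata
```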


## Example plugins [_example_plugins]

The {{es}} repository contains [examples of plugins](https://github.com/elastic/elasticsearch/tree/main/plugins/examples). Some of these include:

* a plugin with [custom settings](https://github.com/elastic/elasticsearch/tree/main/plugins/examples/custom-settings)
* a plugin with a [custom ingest processor](https://github.com/elastic/elasticsearch/tree/main/plugins/examples/custom-processor)
* adding [custom rest endpoints](https://github.com/elastic/elasticsearch/tree/main/plugins/examples/rest-handler)
* adding a [custom rescorer](https://github.com/elastic/elasticsearch/tree/main/plugins/examples/rescore)
* a script [implemented in Java](https://github.com/elastic/elasticsearch/tree/main/plugins/examples/script-expert-scoring)

These examples provide the bare bones needed to get started. For more information about how to write a plugin, we recommend looking at the [source code of existing plugins](https://github.com/elastic/elasticsearch/tree/main/plugins/) for inspiration.


## Testing your plugin [_testing_your_plugin]

Use `bin/elasticsearch-plugin install file:///path/to/your/plugin` to install your plugin for testing. The Java plugin is auto-loaded only if it’s in the `plugins/` directory.


## Java Security permissions [plugin-authors-jsm]

Some plugins may need additional security permissions. A plugin can include the optional `plugin-security.policy` file containing `grant` statements for additional permissions. Any additional permissions are displayed to the user with a large warning, and they must confirm them when installing the plugin interactively. If possible, avoid requesting spurious permissions.

If you are using the {{es}} Gradle build system, place this file in `src/main/plugin-metadata` and it will be applied during unit tests as well.
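
As a sketch, a `plugin-security.policy` file uses standard Java policy syntax; the specific permission below is illustrative only:

```
grant {
  // hypothetical example: allow outbound network connections
  permission java.net.SocketPermission "*", "connect";
};
```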

The Java security model is stack-based, and additional permissions are granted to the JARs in your plugin, so you have to write proper security code around operations requiring elevated privileges. You should add a check to prevent unprivileged code (such as scripts) from gaining escalated permissions. For example:

```java
// ES permission you should check before doPrivileged() blocks
import org.elasticsearch.SpecialPermission;

import java.security.AccessController;
import java.security.PrivilegedAction;

SecurityManager sm = System.getSecurityManager();
if (sm != null) {
    // unprivileged code such as scripts do not have SpecialPermission
    sm.checkPermission(new SpecialPermission());
}
AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
    // sensitive operation
    return null;
});
```

See [Secure Coding Guidelines for Java SE](https://www.oracle.com/technetwork/java/seccodeguide-139067.html) for more information.


91 changes: 91 additions & 0 deletions docs/extend/creating-stable-plugins.md
@@ -0,0 +1,91 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/plugins/current/creating-stable-plugins.html
---

# Creating text analysis plugins with the stable plugin API [creating-stable-plugins]

Text analysis plugins provide {{es}} with custom [Lucene analyzers, token filters, character filters, and tokenizers](docs-content://manage-data/data-store/text-analysis.md).


## The stable plugin API [_the_stable_plugin_api]

Text analysis plugins can be developed against the stable plugin API. This API consists of the following dependencies:

* `plugin-api` - an API used by plugin developers to implement custom {{es}} plugins.
* `plugin-analysis-api` - an API used by plugin developers to implement analysis plugins and integrate them into {{es}}.
* `lucene-analysis-common` - a dependency of `plugin-analysis-api` that contains core Lucene analysis interfaces like `Tokenizer`, `Analyzer`, and `TokenStream`.

For new versions of {{es}} within the same major version, plugins built against this API do not need to be recompiled. Future versions of the API will be backwards compatible, so plugins remain binary compatible with future versions of {{es}}. In other words, once you have a working artifact, you can reuse it when you upgrade {{es}} to a new bugfix or minor version.

A text analysis plugin can implement any of the four factory interfaces provided by the analysis plugin API:

* `AnalyzerFactory` to create a Lucene analyzer
* `CharFilterFactory` to create a Lucene character filter
* `TokenFilterFactory` to create a Lucene token filter
* `TokenizerFactory` to create a Lucene tokenizer

The key to implementing a stable plugin is the `@NamedComponent` annotation. Many of {{es}}'s components have names that are used in configurations. For example, the keyword analyzer is referenced in configuration with the name `"keyword"`. Once your custom plugin is installed in your cluster, your named components may be referenced by name in these configurations as well.

You can also create text analysis plugins as a [classic plugin](/extend/creating-classic-plugins.md). However, classic plugins are pinned to a specific version of {{es}}. You need to recompile them when upgrading {{es}}. Because classic plugins are built against internal APIs that can change, upgrading to a new version may require code changes.


## Stable plugin file structure [_stable_plugin_file_structure]

Stable plugins are ZIP files composed of JAR files and two metadata files:

* `stable-plugin-descriptor.properties` - a Java properties file that describes the plugin. Refer to [The plugin descriptor file for stable plugins](/extend/plugin-descriptor-file-stable.md).
* `named_components.json` - a JSON file mapping interfaces to key-value pairs of component names and implementation classes.
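
For illustration, a `named_components.json` file for a plugin that registers a single token filter under the name `hello_world` might look like this (the component name and implementation class are hypothetical):

```json
{
  "org.elasticsearch.plugin.analysis.TokenFilterFactory": {
    "hello_world": "org.example.HelloWorldTokenFilterFactory"
  }
}
```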

Note that only JAR files at the root of the plugin are added to the classpath for the plugin. If you need other resources, package them into a resources JAR.


## Development process [_development_process]

Elastic provides a Gradle plugin, `elasticsearch.stable-esplugin`, that makes it easier to develop and package stable plugins. The steps in this section assume you use this plugin. However, you don’t need Gradle to create plugins.

The {{es}} GitHub repository contains [an example analysis plugin](https://github.com/elastic/elasticsearch/tree/main/plugins/examples/stable-analysis). The example `build.gradle` build script provides a good starting point for developing your own plugin.


### Prerequisites [_prerequisites]

Plugins are written in Java, so you need to install a Java Development Kit (JDK). Install Gradle if you want to use it.


### Step by step [_step_by_step]

1. Create a directory for your project.
2. Copy the example `build.gradle` build script to your project directory. Note that this build script uses the `elasticsearch.stable-esplugin` Gradle plugin to build your plugin.
3. Edit the `build.gradle` build script:

* Add a definition for the `pluginApiVersion` and matching `luceneVersion` variables to the top of the file. You can find these versions in the `build-tools-internal/version.properties` file in the [Elasticsearch Github repository](https://github.com/elastic/elasticsearch/).
* Edit the `name` and `description` in the `esplugin` section of the build script. This will create the plugin descriptor file. If you’re not using the `elasticsearch.stable-esplugin` Gradle plugin, refer to [The plugin descriptor file for stable plugins](/extend/plugin-descriptor-file-stable.md) to create the file manually.
* Add module information.
* Ensure you have declared the following compile-time dependencies. These dependencies are compile-time only because {{es}} will provide these libraries at runtime.

* `org.elasticsearch.plugin:elasticsearch-plugin-api`
* `org.elasticsearch.plugin:elasticsearch-plugin-analysis-api`
* `org.apache.lucene:lucene-analysis-common`

* For unit testing, ensure these dependencies have also been added to the `build.gradle` script as `testImplementation` dependencies.

4. Implement an interface from the analysis plugin API, annotating it with `NamedComponent`. Refer to [Example text analysis plugin](/extend/example-text-analysis-plugin.md) for an example.
5. You should now be able to assemble a plugin ZIP file by running:

```sh
gradle bundlePlugin
```

The resulting plugin ZIP file is written to the `build/distributions` directory.



### YAML REST tests [_yaml_rest_tests]

The Gradle `elasticsearch.yaml-rest-test` plugin enables testing of your plugin using the [{{es}} yamlRestTest framework](https://github.com/elastic/elasticsearch/blob/main/rest-api-spec/src/yamlRestTest/resources/rest-api-spec/test/README.asciidoc). These tests use a YAML-formatted domain language to issue REST requests against an internal {{es}} cluster that has your plugin installed, and to check the results of those requests. The structure of a YAML REST test directory is as follows:

* A test suite class, defined under `src/yamlRestTest/java`. This class should extend `ESClientYamlSuiteTestCase`.
* The YAML tests themselves should be defined under `src/yamlRestTest/resources/test/`.



190 changes: 190 additions & 0 deletions docs/extend/example-text-analysis-plugin.md
@@ -0,0 +1,190 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/plugins/current/example-text-analysis-plugin.html
---

# Example text analysis plugin [example-text-analysis-plugin]

This example shows how to create a simple "Hello world" text analysis plugin using the stable plugin API. The plugin provides a custom Lucene token filter that strips all tokens except for "hello" and "world".

Elastic provides a Gradle plugin, `elasticsearch.stable-esplugin`, that makes it easier to develop and package stable plugins. The steps in this guide assume you use this plugin. However, you don’t need Gradle to create plugins.

1. Create a new directory for your project.
2. In this example, the source code is organized under the `main` and `test` directories. In your project’s home directory, create `src/`, `src/main/`, and `src/test/` directories.
3. Create the following `build.gradle` build script in your project’s home directory:

```gradle
ext.pluginApiVersion = '8.7.0'
ext.luceneVersion = '9.5.0'

buildscript {
    ext.pluginApiVersion = '8.7.0'
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "org.elasticsearch.gradle:build-tools:${pluginApiVersion}"
    }
}

apply plugin: 'elasticsearch.stable-esplugin'
apply plugin: 'elasticsearch.yaml-rest-test'

esplugin {
    name 'my-plugin'
    description 'My analysis plugin'
}

group 'org.example'
version '1.0-SNAPSHOT'

repositories {
    mavenLocal()
    mavenCentral()
}

dependencies {
    //TODO transitive dependency off and plugin-api dependency?
    compileOnly "org.elasticsearch.plugin:elasticsearch-plugin-api:${pluginApiVersion}"
    compileOnly "org.elasticsearch.plugin:elasticsearch-plugin-analysis-api:${pluginApiVersion}"
    compileOnly "org.apache.lucene:lucene-analysis-common:${luceneVersion}"

    //TODO for testing this also have to be declared
    testImplementation "org.elasticsearch.plugin:elasticsearch-plugin-api:${pluginApiVersion}"
    testImplementation "org.elasticsearch.plugin:elasticsearch-plugin-analysis-api:${pluginApiVersion}"
    testImplementation "org.apache.lucene:lucene-analysis-common:${luceneVersion}"

    testImplementation('junit:junit:4.13.2') {
        exclude group: 'org.hamcrest'
    }
    testImplementation 'org.mockito:mockito-core:4.4.0'
    testImplementation 'org.hamcrest:hamcrest:2.2'
}
```

4. In `src/main/java/org/example/`, create `HelloWorldTokenFilter.java`. This file provides the code for a token filter that strips all tokens except for "hello" and "world":

```java
package org.example;

import org.apache.lucene.analysis.FilteringTokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

import java.util.Arrays;

public class HelloWorldTokenFilter extends FilteringTokenFilter {
    private final CharTermAttribute term = addAttribute(CharTermAttribute.class);

    public HelloWorldTokenFilter(TokenStream input) {
        super(input);
    }

    @Override
    public boolean accept() {
        if (term.length() != 5) return false;
        // Compare all five characters; the toIndex argument is exclusive
        return Arrays.equals(term.buffer(), 0, 5, "hello".toCharArray(), 0, 5)
            || Arrays.equals(term.buffer(), 0, 5, "world".toCharArray(), 0, 5);
    }
}
```

5. This filter can be provided to {{es}} using the following `HelloWorldTokenFilterFactory.java` factory class. The `@NamedComponent` annotation gives the filter the name `hello_world`. This is the name you can use to refer to the filter once the plugin has been deployed.

```java
package org.example;

import org.apache.lucene.analysis.TokenStream;
import org.elasticsearch.plugin.analysis.TokenFilterFactory;
import org.elasticsearch.plugin.NamedComponent;

@NamedComponent(value = "hello_world")
public class HelloWorldTokenFilterFactory implements TokenFilterFactory {

    @Override
    public TokenStream create(TokenStream tokenStream) {
        return new HelloWorldTokenFilter(tokenStream);
    }

}
```

6. Unit tests may go under the `src/test` directory. You will have to add dependencies for your preferred testing framework.
7. Run:

```sh
gradle bundlePlugin
```

This builds the JAR file, generates the metadata files, and bundles them into a plugin ZIP file. The resulting ZIP file will be written to the `build/distributions` directory.

8. [Install the plugin](/reference/elasticsearch-plugins/plugin-management.md).
9. You can use the `_analyze` API to verify that the `hello_world` token filter works as expected:

```console
GET /_analyze
{
  "text": "hello to everyone except the world",
  "tokenizer": "standard",
  "filter": ["hello_world"]
}
```


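The range arguments to `Arrays.equals` deserve a second look: `fromIndex` is inclusive and `toIndex` is exclusive, so comparing all five characters of a term requires the range `0, 5`. The following self-contained sketch mirrors the filter's matching logic in plain Java (no Lucene dependency), so you can check the comparison semantics in isolation:

```java
import java.util.Arrays;

public class HelloWorldAcceptDemo {
    // Mirrors the accept() check on a raw char buffer:
    // keep only five-character terms equal to "hello" or "world".
    static boolean accept(char[] buffer, int length) {
        if (length != 5) return false;
        return Arrays.equals(buffer, 0, 5, "hello".toCharArray(), 0, 5)
            || Arrays.equals(buffer, 0, 5, "world".toCharArray(), 0, 5);
    }

    public static void main(String[] args) {
        System.out.println(accept("hello".toCharArray(), 5));  // true
        System.out.println(accept("hellx".toCharArray(), 5));  // false: last char differs
        System.out.println(accept("worlds".toCharArray(), 6)); // false: wrong length
    }
}
```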

## YAML REST tests [_yaml_rest_tests_2]

If you are using the `elasticsearch.stable-esplugin` Gradle plugin, you can use {{es}}'s YAML REST test framework. This framework allows you to load your plugin in a running test cluster and issue real REST API queries against it. The full syntax for this framework is beyond the scope of this tutorial, but there are many examples in the {{es}} repository. Refer to the [example analysis plugin](https://github.com/elastic/elasticsearch/tree/main/plugins/examples/stable-analysis) in the {{es}} GitHub repository for an example.

1. Create a `yamlRestTest` directory in the `src` directory.
2. Under the `yamlRestTest` directory, create a `java` folder for Java sources and a `resources` folder.
3. In `src/yamlRestTest/java/org/example/`, create `HelloWorldPluginClientYamlTestSuiteIT.java`. This class extends `ESClientYamlSuiteTestCase`.

```java
package org.example;

import com.carrotsearch.randomizedtesting.annotations.Name;
import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
import org.elasticsearch.test.rest.yaml.ClientYamlTestCandidate;
import org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase;

public class HelloWorldPluginClientYamlTestSuiteIT extends ESClientYamlSuiteTestCase {

    public HelloWorldPluginClientYamlTestSuiteIT(
        @Name("yaml") ClientYamlTestCandidate testCandidate
    ) {
        super(testCandidate);
    }

    @ParametersFactory
    public static Iterable<Object[]> parameters() throws Exception {
        return ESClientYamlSuiteTestCase.createParameters();
    }
}
```

4. In `src/yamlRestTest/resources/rest-api-spec/test/plugin`, create the `10_token_filter.yml` YAML file:

```yaml
## Sample rest test
---
"Hello world plugin test - removes all tokens except hello and world":
  - do:
      indices.analyze:
        body:
          text: hello to everyone except the world
          tokenizer: standard
          filter:
            - type: "hello_world"
  - length: { tokens: 2 }
  - match: { tokens.0.token: "hello" }
  - match: { tokens.1.token: "world" }
```

5. Run the test with:

```sh
gradle yamlRestTest
```


19 changes: 19 additions & 0 deletions docs/extend/index.md
@@ -0,0 +1,19 @@
---
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/plugins/current/plugin-authors.html
---

# Create Elasticsearch plugins [plugin-authors]

{{es}} plugins are modular bits of code that add functionality to {{es}}. Plugins are written in Java and implement Java interfaces that are defined in the source code. Plugins are composed of JAR files and metadata files, compressed into a single ZIP file.

There are two ways to create a plugin:

[Creating text analysis plugins with the stable plugin API](/extend/creating-stable-plugins.md)
: Text analysis plugins can be developed against the stable plugin API to provide {{es}} with custom Lucene analyzers, token filters, character filters, and tokenizers.

[Creating classic plugins](/extend/creating-classic-plugins.md)
: Other plugins can be developed against the classic plugin API to provide custom authentication, authorization, or scoring mechanisms, and more.


