Commit 583f071

Merge pull request #44125 from Amrita42/BZ2033580new

BZ2033580: corrected wrong instances of plug-in and parameter

2 parents: 4f7f958 + 8f44b1c

20 files changed: +37 −37 lines

logging/cluster-logging-release-notes.adoc

Lines changed: 3 additions & 3 deletions

@@ -113,15 +113,15 @@ To see these metrics, open the *Administrator* perspective in the {product-title

 * Before this update, the `kibana-proxy` Pod sometimes entered the `CrashLoopBackoff` state and logged the following message `Invalid configuration: cookie_secret must be 16, 24, or 32 bytes to create an AES cipher when pass_access_token == true or cookie_refresh != 0, but is 29 bytes.` The exact actual number of bytes could vary. With this update, the generation of the Kibana session secret has been corrected, and the kibana-proxy Pod no longer enters a `CrashLoopBackoff` state due to this error. (link:https://issues.redhat.com/browse/LOG-1446[LOG-1446])

-* Before this update, the AWS CloudWatch Fluentd plugin logged its AWS API calls to the Fluentd log at all log levels, consuming additional {product-title} node resources. With this update, the AWS CloudWatch Fluentd plugin logs AWS API calls only at the "debug" and "trace" log levels. This way, at the default "warn" log level, Fluentd does not consume extra node resources. (link:https://issues.redhat.com/browse/LOG-1071[LOG-1071])
+* Before this update, the AWS CloudWatch Fluentd plug-in logged its AWS API calls to the Fluentd log at all log levels, consuming additional {product-title} node resources. With this update, the AWS CloudWatch Fluentd plug-in logs AWS API calls only at the "debug" and "trace" log levels. This way, at the default "warn" log level, Fluentd does not consume extra node resources. (link:https://issues.redhat.com/browse/LOG-1071[LOG-1071])

-* Before this update, the Elasticsearch OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. (link:https://issues.redhat.com/browse/LOG-1276[LOG-1276])
+* Before this update, the Elasticsearch OpenDistro security plug-in caused user index migrations to fail. This update resolves the issue by providing a newer version of the plug-in. Now, index migrations proceed without errors. (link:https://issues.redhat.com/browse/LOG-1276[LOG-1276])

 * Before this update, in the *Logging* dashboard in the {product-title} console, the list of top 10 log-producing containers lacked data points. This update resolves the issue, and the dashboard displays all data points. (link:https://issues.redhat.com/browse/LOG-1353[LOG-1353])

 * Before this update, if you were tuning the performance of the Fluentd log forwarder by adjusting the `chunkLimitSize` and `totalLimitSize` values, the `Setting queued_chunks_limit_size for each buffer to` message reported values that were too low. The current update fixes this issue so that this message reports the correct values. (link:https://issues.redhat.com/browse/LOG-1411[LOG-1411])

-* Before this update, the Kibana OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. (link:https://issues.redhat.com/browse/LOG-1558[LOG-1558])
+* Before this update, the Kibana OpenDistro security plug-in caused user index migrations to fail. This update resolves the issue by providing a newer version of the plug-in. Now, index migrations proceed without errors. (link:https://issues.redhat.com/browse/LOG-1558[LOG-1558])

 * Before this update, using a namespace input filter prevented logs in that namespace from appearing in other inputs. With this update, logs are sent to all inputs that can accept them. (link:https://issues.redhat.com/browse/LOG-1570[LOG-1570])
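For context on the LOG-1411 note above: the `chunkLimitSize` and `totalLimitSize` values it mentions are Fluentd buffer tuning fields in the `ClusterLogging` custom resource. A minimal sketch (not part of this commit; the values shown are illustrative, not recommendations):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  forwarder:
    fluentd:
      buffer:
        chunkLimitSize: 8m     # illustrative value
        totalLimitSize: 512m   # illustrative value
```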

modules/builds-understanding-openshift-pipeline.adoc

Lines changed: 6 additions & 6 deletions

@@ -14,20 +14,20 @@ Jenkins images on {product-title} are fully supported and users should follow Je

 Pipelines give you control over building, deploying, and promoting your applications on {product-title}. Using a combination of the Jenkins Pipeline build strategy, `jenkinsfiles`, and the {product-title} Domain Specific Language (DSL) provided by the Jenkins Client Plug-in, you can create advanced build, test, deploy, and promote pipelines for any scenario.

-*{product-title} Jenkins Sync Plugin*
+*{product-title} Jenkins Sync Plug-in*

-The {product-title} Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following:
+The {product-title} Jenkins Sync Plug-in keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following:

 * Dynamic job and run creation in Jenkins.
 * Dynamic creation of agent pod templates from image streams, image stream tags, or config maps.
 * Injecting of environment variables.
 * Pipeline visualization in the {product-title} web console.
-* Integration with the Jenkins git plugin, which passes commit information from
-* Synchronizing secrets into Jenkins credential entries {product-title} builds to the Jenkins git plugin.
+* Integration with the Jenkins git plug-in, which passes commit information from {product-title} builds to the Jenkins git plug-in.
+* Synchronizing secrets into Jenkins credential entries.

-*{product-title} Jenkins Client Plugin*
+*{product-title} Jenkins Client Plug-in*

-The {product-title} Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an {product-title} API Server. The plugin uses the {product-title} command line tool, `oc`, which must be available on the nodes executing the script.
+The {product-title} Jenkins Client Plug-in is a Jenkins plug-in that provides a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an {product-title} API Server. The plug-in uses the {product-title} command line tool, `oc`, which must be available on the nodes executing the script.

 The Jenkins Client Plug-in must be installed on your Jenkins master so the {product-title} DSL will be available to use within the `jenkinsfile` for your application. This plug-in is installed and enabled by default when using the {product-title} Jenkins image.

modules/cli-extending-plugins-installing.adoc

Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ $ oc plugin list

 .Example output
 [source,terminal]
 ----
-The following compatible plugins are available:
+The following compatible plug-ins are available:

 /usr/local/bin/<plugin_file>
 ----
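For orientation on the `oc plugin list` output above: `oc`, like `kubectl`, discovers plug-ins as executables named `oc-<name>` on your `PATH`. A minimal sketch (the plug-in name `hello` and the install directory are hypothetical, not part of this commit):

```shell
# Create a trivial oc plug-in: any executable named oc-<name> on PATH.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/oc-hello" <<'EOF'
#!/bin/sh
# Hypothetical plug-in body; a real plug-in can invoke any tooling.
echo "hello from an oc plug-in"
EOF
chmod +x "$HOME/bin/oc-hello"
export PATH="$HOME/bin:$PATH"

# With oc installed, `oc plugin list` would now report $HOME/bin/oc-hello,
# and `oc hello` would execute it. Run the file directly to verify:
"$HOME/bin/oc-hello"
```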

modules/cluster-logging-release-notes-5.2.z.adoc

Lines changed: 1 addition & 1 deletion

@@ -99,7 +99,7 @@ This release includes link:https://access.redhat.com/errata/RHSA-2021:5127[RHSA-

 * Before this update, records shipped via syslog would serialize a ruby hash encoding key/value pairs to contain a '=>' character, as well as replace tabs with "#11". This update serializes the message correctly as proper JSON. (link:https://issues.redhat.com/browse/LOG-1775[LOG-1775])

-* Before this update, the Elasticsearch Prometheus exporter plugin compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. (link:https://issues.redhat.com/browse/LOG-1970[LOG-1970])
+* Before this update, the Elasticsearch Prometheus exporter plug-in compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. (link:https://issues.redhat.com/browse/LOG-1970[LOG-1970])

 * Before this update, Elasticsearch sometimes rejected messages when Log Forwarding was configured with multiple outputs. This happened because configuring one of the outputs modified message content to be a single message. With this update, Log Forwarding duplicates the messages for each output so that output-specific processing does not affect the other outputs. (link:https://issues.redhat.com/browse/LOG-1824[LOG-1824])

modules/images-configuration-file.adoc

Lines changed: 1 addition & 1 deletion

@@ -54,7 +54,7 @@ status:

 <1> `Image`: Holds cluster-wide information about how to handle images. The canonical, and only valid name is `cluster`.
 <2> `allowedRegistriesForImport`: Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or `ImageStreamMappings` from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions.
 <3> `additionalTrustedCA`: A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, `openshift-image-registry` pullthrough, and builds. The namespace for this config map is `openshift-config`. The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust.
-<4> `registrySources`: Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the `allowedRegistries` parameter or the `blockedRegistries` parameter can be set, but not both. You can also define whether or not to allow access to insecure registries or registries that allow registries that use image short names. This example uses the `allowedRegistries` parameter, which defines the registries that are allowed to be used. The insecure registry `insecure.com` is also allowed. The `registrySources` paramter does not contain configuration for the internal cluster registry.
+<4> `registrySources`: Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the `allowedRegistries` parameter or the `blockedRegistries` parameter can be set, but not both. You can also define whether to allow access to insecure registries and to registries that allow image short names. This example uses the `allowedRegistries` parameter, which defines the registries that are allowed to be used. The insecure registry `insecure.com` is also allowed. The `registrySources` parameter does not contain configuration for the internal cluster registry.
 +
 [NOTE]
 ====
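The callouts in the hunk above annotate the cluster-scoped `Image` resource. As a reference sketch of how those fields fit together (not part of this commit; the registry names and config map name are examples):

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster          # the only valid name
spec:
  allowedRegistriesForImport:
  - domainName: quay.io
    insecure: false
  additionalTrustedCA:
    name: myconfigmap    # example config map in the openshift-config namespace
  registrySources:
    allowedRegistries:   # set this OR blockedRegistries, not both
    - quay.io
    - registry.redhat.io
    - insecure.com
    insecureRegistries:
    - insecure.com
```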

modules/images-other-jenkins-oauth-auth.adoc

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ The default {product-title} `admin`, `edit`, and `view` roles and the Jenkins pe

 When running Jenkins in an {product-title} pod, the login plug-in looks for a config map named `openshift-jenkins-login-plugin-config` in the namespace that Jenkins is running in.

-If this plugin finds and can read in that config map, you can define the role to Jenkins Permission mappings. Specifically:
+If this plug-in finds and can read in that config map, you can define the role to Jenkins Permission mappings. Specifically:

 * The login plug-in treats the key and value pairs in the config map as Jenkins permission to {product-title} role mappings.
 * The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character.
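As context for the hunk above, a sketch of the config map the login plug-in reads. The keys follow the hyphen-separated `<permission group short ID>-<permission short ID>` format described in the bullets; the namespace and the specific mappings here are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: openshift-jenkins-login-plugin-config
  namespace: ci          # example: the namespace Jenkins runs in
data:
  # key: Jenkins permission group short ID + permission short ID, hyphen-separated
  # value: comma-separated {product-title} roles granted that permission
  Overall-Administer: admin
  Overall-Read: admin,edit,view
  Job-Build: admin,edit
```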

modules/jt-comparison-of-jenkins-and-openshift-pipelines-concepts.adoc

Lines changed: 2 additions & 2 deletions

@@ -9,11 +9,11 @@

 You can review and compare the following equivalent terms used in Jenkins and OpenShift Pipelines.

 == Jenkins terminology
-Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plugins. Some basic terms in Jenkins are as follows:
+Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plug-ins. Some basic terms in Jenkins are as follows:

 * *Pipeline*: Automates the entire process of building, testing, and deploying applications by using link:https://groovy-lang.org/[Groovy] syntax.
 * *Node*: A machine capable of either orchestrating or executing a scripted pipeline.
-* *Stage*: A conceptually distinct subset of tasks performed in a pipeline. Plugins or user interfaces often use this block to display the status or progress of tasks.
+* *Stage*: A conceptually distinct subset of tasks performed in a pipeline. Plug-ins or user interfaces often use this block to display the status or progress of tasks.
 * *Step*: A single task that specifies the exact action to be taken, either by using a command or a script.

 == OpenShift Pipelines terminology

modules/jt-comparison-of-jenkins-openshift-pipelines-execution-models.adoc

Lines changed: 1 addition & 1 deletion

@@ -14,5 +14,5 @@ Jenkins and OpenShift Pipelines offer similar functions but are different in arc

 |Jenkins|OpenShift Pipelines
 |Jenkins has a controller node. Jenkins runs pipelines and steps centrally, or orchestrates jobs running in other nodes.|OpenShift Pipelines is serverless and distributed, and there is no central dependency for execution.
 |Containers are launched by the Jenkins controller node through the pipeline.|OpenShift Pipelines adopts a 'container-first' approach, where every step runs as a container in a pod (equivalent to nodes in Jenkins).
-|Extensibility is achieved by using plugins.|Extensibility is achieved by using tasks in Tekton Hub or by creating custom tasks and scripts.
+|Extensibility is achieved by using plug-ins.|Extensibility is achieved by using tasks in Tekton Hub or by creating custom tasks and scripts.
 |===

modules/jt-examples-of-common-use-cases.adoc

Lines changed: 4 additions & 4 deletions

@@ -9,7 +9,7 @@

 Both Jenkins and OpenShift Pipelines offer capabilities for common CI/CD use cases, such as:

 * Compiling, building, and deploying images using Apache Maven
-* Extending the core capabilities by using plugins
+* Extending the core capabilities by using plug-ins
 * Reusing shareable libraries and custom scripts

 == Running a Maven pipeline in Jenkins and OpenShift Pipelines

@@ -151,12 +151,12 @@ spec:

 ----

-== Extending the core capabilities of Jenkins and OpenShift Pipelines by using plugins
-Jenkins has the advantage of a large ecosystem of numerous plugins developed over the years by its extensive user base. You can search and browse the plugins in the link:https://plugins.jenkins.io/[Jenkins Plugin Index].
+== Extending the core capabilities of Jenkins and OpenShift Pipelines by using plug-ins
+Jenkins has the advantage of a large ecosystem of plug-ins developed over the years by its extensive user base. You can search and browse the plug-ins in the link:https://plugins.jenkins.io/[Jenkins Plug-in Index].

 OpenShift Pipelines also has many tasks developed and contributed by the community and enterprise users. A publicly available catalog of reusable OpenShift Pipelines tasks is available in the link:https://hub.tekton.dev/[Tekton Hub].

-In addition, OpenShift Pipelines incorporates many of the plugins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and OpenShift Pipelines. While Jenkins ensures authorization using the link:https://plugins.jenkins.io/role-strategy/[Role-based Authorization Strategy] plugin, OpenShift Pipelines uses OpenShift's built-in Role-based Access Control system.
+In addition, OpenShift Pipelines incorporates many of the plug-ins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and OpenShift Pipelines. While Jenkins ensures authorization using the link:https://plugins.jenkins.io/role-strategy/[Role-based Authorization Strategy] plug-in, OpenShift Pipelines uses OpenShift's built-in Role-based Access Control system.

 == Sharing reusable code in Jenkins and OpenShift Pipelines
 Jenkins link:https://www.jenkins.io/doc/book/pipeline/shared-libraries/[shared libraries] provide reusable code for parts of Jenkins pipelines. The libraries are shared between link:https://www.jenkins.io/doc/book/pipeline/jenkinsfile/[Jenkinsfiles] to create highly modular pipelines without code repetition.

modules/jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks.adoc

Lines changed: 3 additions & 3 deletions

@@ -5,11 +5,11 @@

 :_content-type: PROCEDURE

 [id="jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks_{context}"]
-= Migrating from Jenkins plugins to Tekton Hub tasks
+= Migrating from Jenkins plug-ins to Tekton Hub tasks

-You can extend the capability of Jenkins by using link:https://plugins.jenkinsci.org[plugins]. To achieve similar extensibility in OpenShift Pipelines, use any of the tasks available from link:https://hub.tekton.dev[Tekton Hub].
+You can extend the capability of Jenkins by using link:https://plugins.jenkinsci.org[plug-ins]. To achieve similar extensibility in OpenShift Pipelines, use any of the tasks available from link:https://hub.tekton.dev[Tekton Hub].

-For example, consider the link:https://hub.tekton.dev/tekton/task/git-clone[git-clone] task in Tekton Hub, which corresponds to the link:https://plugins.jenkins.io/git/[git plugin] for Jenkins.
+For example, consider the link:https://hub.tekton.dev/tekton/task/git-clone[git-clone] task in Tekton Hub, which corresponds to the link:https://plugins.jenkins.io/git/[git plug-in] for Jenkins.

 .Example: `git-clone` task from Tekton Hub
 [source,yaml,subs="attributes+"]
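The `git-clone` example above is truncated in this diff view. For orientation only (a sketch, not the commit's content; the pipeline and workspace names are hypothetical), a pipeline typically consumes the Tekton Hub `git-clone` task like this:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: clone-and-build        # hypothetical pipeline name
spec:
  params:
  - name: repo-url
    type: string
  workspaces:
  - name: shared-workspace
  tasks:
  - name: fetch-repository
    taskRef:
      name: git-clone          # the Tekton Hub task, installed on the cluster
    params:
    - name: url
      value: $(params.repo-url)
    workspaces:
    - name: output             # git-clone writes the clone here
      workspace: shared-workspace
```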
