Commit 802d3a9
Moving the docker file
1 parent f853116 commit 802d3a9

File tree

3 files changed: +30 −30 lines changed
File renamed without changes.

CONTRIBUTING.md

Lines changed: 24 additions & 24 deletions
@@ -5,7 +5,7 @@ distribution.
 ### Requirements:
 * MarkLogic Server 11+
 * Java version 17 is required to use the Gradle tools.
-Additionally, SonarQube requires the use of Java 17.
+Additionally, SonarQube requires the use of Java 17.

 See [the Confluent compatibility matrix](https://docs.confluent.io/platform/current/installation/versions-interoperability.html#java)
 for more information. After installing your desired version of Java, ensure that the `JAVA_HOME` environment variable
@@ -21,7 +21,7 @@ Note that you do not need to install [Gradle](https://gradle.org/) - the "gradle
 appropriate version of Gradle if you do not have it installed already.

 ## Virtual Server Preparation
-The project includes a docker-compose file that includes MarkLogic, SonarQube with a Postgres server, and Confluent
+The project includes a docker-compose file in the repository root that includes MarkLogic, SonarQube with a Postgres server, and Confluent
 Platform servers.

 ### Confluent Platform
@@ -33,14 +33,14 @@ The Confluent Platform servers in this docker-compose file are based on the Conf
 [Install a Confluent Platform cluster in Docker using a Confluent docker-compose file](https://docs.confluent.io/platform/current/platform-quickstart.html).

 ## Docker Cluster Preparation
-To setup the docker cluster, use the docker-compose file in the "test-app" directory to build the Docker cluster with
+To setup the docker cluster, use the docker-compose file in the repository root to build the Docker cluster with
 the command:
 ```
-docker-compose -f docker-compose.yml up -d --build
+docker-compose up -d --build
 ```
 When the setup is complete, you should be able to run
 ```
-docker-compose -f docker-compose.yml ps
+docker-compose ps
 ```
 and see results similar to the following.
 ```
@@ -61,17 +61,17 @@ schema-registry confluentinc/cp-schema-registry:7.6.1
 You can now visit several web applications:
 * http://localhost:8000 to access the MarkLogic server.
 * http://localhost:9000 to use the SonarQube server as described in the "Running Sonar Code Analysis"
-section below.
+section below.
 * http://localhost:9021 to access
-[Confluent's Control Center GUI](https://docs.confluent.io/platform/current/control-center/index.html) application.
-Within Control Center, click on "controlcenter.cluster" to access the configuration for the Kafka cluster.
+[Confluent's Control Center GUI](https://docs.confluent.io/platform/current/control-center/index.html) application.
+Within Control Center, click on "controlcenter.cluster" to access the configuration for the Kafka cluster.

 ## MarkLogic Preparation
 To prepare the MarkLogic server for automated testing as well as testing with the Confluent Platform, the Data Hub based
-application must be deployed. From the "test-app" directory, follow these steps:
+application must be deployed. From the root directory, follow these steps:
 1. Run `./gradlew hubInit`
 2. Edit gradle-local.properties and set `mlUsername` and `mlPassword`
-3. Run `./gradlew -i mlDeploy`
+3. Run `./gradlew -i test-app:mlDeploy`

 Note: If you change the version of Data Hub Framework used by this project, you should also delete the following directories:
 * 'test-app/src/main/entity-config'
@@ -102,7 +102,7 @@ To configure the SonarQube service, perform the following steps:
 9. In the "Provide a token" panel, click on "Generate". Copy the token.
 10. Click the "Continue" button.
 11. Update `systemProp.sonar.token=<Replace With Your Sonar Token>` in `gradle-local.properties` in the root of your
-project.
+project.

 To run the SonarQube analysis, run the following Gradle task in the root directory, which will run all the tests with
 code coverage and then generate a quality report with SonarQube:
@@ -218,8 +218,8 @@ contents of the `data-hub-FINAL` database.

 ## Debugging the MarkLogic Kafka connector

-The main mechanism for debugging an instance of the MarkLogic Kafka connector is by examining logs from the
-connector. You can access those, along with logging from Kafka Connect and all other connectors, by running the
+The main mechanism for debugging an instance of the MarkLogic Kafka connector is by examining logs from the
+connector. You can access those, along with logging from Kafka Connect and all other connectors, by running the
 following:

     confluent local services connect log -f
@@ -228,43 +228,43 @@ See [the log command docs](https://docs.confluent.io/confluent-cli/current/comma
 for more information.

 You can also customize Confluent logging by [adjusting the log4j file for Kafka Connect](https://docs.confluent.io/platform/current/connect/logging.html#viewing-kconnect-logs).
-For example, to prevent some logging from Kafka Connect and from the Java Client DMSDK, add the following to the
+For example, to prevent some logging from Kafka Connect and from the Java Client DMSDK, add the following to the
 `$CONFLUENT_HOME/etc/kafka/connect-log4j.properties` file:

     log4j.logger.org.apache.kafka=WARN
     log4j.logger.com.marklogic.client.datamovement=WARN


 # Testing with basic Apache Kafka
-The primary reason to test the MarkLogic Kafka connector via a regular Kafka distribution is that the development
-cycle is much faster and more reliable - i.e. you can repeatedly redeploy the connector and restart Kafka Connect to
+The primary reason to test the MarkLogic Kafka connector via a regular Kafka distribution is that the development
+cycle is much faster and more reliable - i.e. you can repeatedly redeploy the connector and restart Kafka Connect to
 test changes, and Kafka Connect will continue to work fine. This is particularly useful when the changes you're testing
 do not require testing the GUI provided by Confluent Control Center.

-To get started, these instructions assume that you already have an instance of Apache Kafka installed; the
-[Kafka Quickstart](https://kafka.apache.org/quickstart) instructions provide an easy way of accomplishing this. Perform
+To get started, these instructions assume that you already have an instance of Apache Kafka installed; the
+[Kafka Quickstart](https://kafka.apache.org/quickstart) instructions provide an easy way of accomplishing this. Perform
 step 1 of these instructions before proceeding.

 Next, configure your Gradle properties to point to your Kafka installation and deploy the connector there:

 1. Configure `kafkaHome` in gradle-local.properties - e.g. `kafkaHome=/Users/myusername/kafka_2.13-2.8.1`
 2. Configure `kafkaMlUsername` and `kafkaMlPassword` in gradle-local.properties, setting these to a MarkLogic user that
-is able to write documents to MarkLogic. These values will be used to populate the
+is able to write documents to MarkLogic. These values will be used to populate the
 `ml.connection.username` and `ml.connection.password` connector properties.
 3. Run `./gradlew clean deploy` to build a jar and copy it and the config property files to your Kafka installation

 [Step 2 in the Kafka Quickstart guide](https://kafka.apache.org/quickstart) provides the instructions for starting the
-separate Zookeeper and Kafka server processes. You'll need to run these commands from your Kafka installation
-directory. As of August 2022, those commands are (these seem very unlikely to change and thus are included here for
+separate Zookeeper and Kafka server processes. You'll need to run these commands from your Kafka installation
+directory. As of August 2022, those commands are (these seem very unlikely to change and thus are included here for
 convenience):

     bin/zookeeper-server-start.sh config/zookeeper.properties

-and
+and

     bin/kafka-server-start.sh config/server.properties

-Next, start the Kafka connector in standalone mode (also from the Kafka home directory). To run the sink connector,
+Next, start the Kafka connector in standalone mode (also from the Kafka home directory). To run the sink connector,
 use the following command:

     bin/connect-standalone.sh config/marklogic-connect-standalone.properties config/marklogic-sink.properties
@@ -278,7 +278,7 @@ You'll see a fair amount of logging from Kafka itself; near the end of the loggi
 `RowManagerSourceTask` to ensure that the connector has started up correctly.

 ## Sink Connector Testing
-To test out the sink connector, you can use the following command to enter a CLI that allows you to manually send
+To test out the sink connector, you can use the following command to enter a CLI that allows you to manually send
 messages to the `marklogic` topic that the connector is configured by default to read from:

     bin/kafka-console-producer.sh --broker-list localhost:9092 --topic marklogic
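Taken together, the CONTRIBUTING.md changes above move the developer workflow from the "test-app" directory to the repository root. As a convenience, the updated commands drawn from the new text are collected here (not re-run; they require a live Docker environment):

```shell
# Run from the repository root (previously run from test-app/):
docker-compose up -d --build   # build and start the Docker cluster
docker-compose ps              # verify the containers are running

# MarkLogic preparation, also from the repository root:
./gradlew hubInit
# ...edit gradle-local.properties to set mlUsername and mlPassword...
./gradlew -i test-app:mlDeploy # mlDeploy is now addressed via the test-app subproject
```

Note the `test-app:` prefix on `mlDeploy`: because Gradle now runs from the root, the deploy task must be qualified with the subproject path.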

test-app/docker-compose.yml renamed to docker-compose.yml

Lines changed: 6 additions & 6 deletions
@@ -2,10 +2,10 @@
 name: marklogic-kafka-confluent
 services:

-  # This compose file is based on:
-  # This guide - https://docs.confluent.io/platform/current/platform-quickstart.html#step-6-uninstall-and-clean-up
-  # This compose file - https://raw.githubusercontent.com/confluentinc/cp-all-in-one/7.6.1-post/cp-all-in-one-kraft/docker-compose.yml
-  # Extended to include a MarkLogic container
+  # This compose file is based on:
+  # This guide - https://docs.confluent.io/platform/current/platform-quickstart.html#step-6-uninstall-and-clean-up
+  # This compose file - https://raw.githubusercontent.com/confluentinc/cp-all-in-one/7.6.1-post/cp-all-in-one-kraft/docker-compose.yml
+  # Extended to include a MarkLogic container

   broker:
     image: confluentinc/cp-kafka:7.6.1
@@ -86,7 +86,7 @@ services:
       CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components,/usr/share/confluent-marklogic-components"
       CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
     volumes:
-      - ./docker/confluent-marklogic-components:/usr/share/confluent-marklogic-components
+      - ./test-app/docker/confluent-marklogic-components:/usr/share/confluent-marklogic-components

   control-center:
     image: confluentinc/cp-enterprise-control-center:7.6.1
@@ -201,7 +201,7 @@ services:
       - MARKLOGIC_ADMIN_USERNAME=admin
       - MARKLOGIC_ADMIN_PASSWORD=admin
     volumes:
-      - ./docker/marklogic/logs:/var/opt/MarkLogic/Logs
+      - ./test-app/docker/marklogic/logs:/var/opt/MarkLogic/Logs
     ports:
       - "8000-8002:8000-8002"
       - "8010-8013:8010-8013"
