Commit 50f847a

Merge pull request #7 from shibbirmcc/develop
Develop
2 parents 4b20cf4 + c73d48d commit 50f847a

60 files changed: +3825 -1 lines

.env

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
DB_HOST=127.0.0.1
DB_USER=serviceuser
DB_PASSWORD=servicepassword
DB_NAME=servicedatabase
DB_PORT=5432
JWT_SECRET=snakeexactwhichrepliedpothearthasdigplentymathemat
PASSWORD_DELIVERY_TYPE=KAFKA_TOPIC
KAFKA_BROKERS=localhost:9092
KAFKA_TOPIC=credentials

.env.test

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
DB_HOST=127.0.0.1
DB_USER=serviceuser
DB_PASSWORD=servicepassword
DB_NAME=servicedatabase
DB_PORT=5432
JWT_SECRET=snakeexactwhichrepliedpothearthasdigplentymathemat
PASSWORD_DELIVERY_TYPE=KAFKA_TOPIC
KAFKA_BROKERS=localhost:9092
KAFKA_TOPIC=credentials

.github/workflows/ci.yml

Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
name: Go CI

on:
  push:
    branches:
      - develop
  pull_request:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4 # Updated to the latest version to support Node 20

      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version: 1.22.8 # Updated to the specified Go version

      - name: Install dependencies
        run: go mod download

      - name: Run tests with coverage
        run: |
          # Run tests and generate a coverage profile
          go test -v -coverprofile=coverage.txt ./...
          # Filter the "tests" and "mocks" directories out of the coverage profile
          grep -v -E "(/tests/|/mocks/)" coverage.txt > coverage_filtered.txt

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          files: coverage_filtered.txt # Use the filtered coverage file
          token: ${{ secrets.CODECOV_TOKEN }} # Add your Codecov token in GitHub Secrets
          flags: unittests
          name: codecov-coverage
          fail_ci_if_error: true

.gitignore

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
coverage.out
.idea
*.iml
.vscode

KAFKA.md

Lines changed: 167 additions & 0 deletions
@@ -0,0 +1,167 @@
# **Running Kafka during local execution and tests**

## Local Execution Kafka Commands

### Option 1: Modern KRaft Mode (Recommended)
Using the official Apache Kafka image with KRaft mode (no Zookeeper required):

```shell
# Clean up any existing containers
sudo docker container prune

# Run Kafka in KRaft mode
sudo docker run -d --name kafka \
  -p 9092:9092 \
  -e KAFKA_NODE_ID=1 \
  -e KAFKA_PROCESS_ROLES=broker,controller \
  -e KAFKA_CONTROLLER_QUORUM_VOTERS=1@localhost:9093 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT \
  -e KAFKA_CONTROLLER_LISTENER_NAMES=CONTROLLER \
  -e KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  -e KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1 \
  -e KAFKA_TRANSACTION_STATE_LOG_MIN_ISR=1 \
  -e KAFKA_LOG_DIRS=/tmp/kraft-combined-logs \
  -e KAFKA_AUTO_CREATE_TOPICS_ENABLE=true \
  apache/kafka:3.8.1

# Create topic
sudo docker exec -it kafka /opt/kafka/bin/kafka-topics.sh \
  --create --topic credentials \
  --bootstrap-server localhost:9092 \
  --partitions 1 --replication-factor 1

# Console consumer
sudo docker exec -it kafka /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 --topic credentials --from-beginning

# Console producer
sudo docker exec -it kafka /opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 --topic credentials
```
### Option 2: Legacy Zookeeper Mode
Using Confluent images with separate Zookeeper (for compatibility with older setups):

```shell
sudo docker container prune

sudo docker run -d --name zookeeper \
  -p 2181:2181 \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  -e ZOOKEEPER_TICK_TIME=2000 \
  confluentinc/cp-zookeeper:latest

sudo docker run -d \
  --name kafka \
  -p 9092:9092 \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_ZOOKEEPER_CONNECT=host.docker.internal:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:latest

# Create topic
sudo docker exec -it kafka kafka-topics --bootstrap-server localhost:9092 --create --replication-factor 1 --partitions 1 --topic credentials

# Console consumer
sudo docker exec -it kafka kafka-console-consumer --bootstrap-server localhost:9092 --topic credentials --from-beginning

# Console producer
sudo docker exec -it kafka kafka-console-producer --bootstrap-server localhost:9092 --topic credentials
```
## TestContainer Execution

The testcontainer implementation now uses the official Apache Kafka image with KRaft mode, eliminating the need for Zookeeper. This provides a simpler, more modern setup.

### Key Features:
- **No Zookeeper Required**: Uses Kafka's KRaft mode (Kafka Raft metadata mode)
- **Official Apache Kafka Image**: Uses `apache/kafka:3.8.1`
- **Automatic Topic Creation**: Creates the required topic programmatically
- **Simplified Configuration**: Fewer moving parts and dependencies

### TestContainer Configuration:
The testcontainer is configured with the following KRaft mode settings:

```go
kafkaReq := testcontainers.ContainerRequest{
	Image:        "apache/kafka:3.8.1", // Latest stable Kafka image with KRaft mode
	ExposedPorts: []string{"9092/tcp"}, // Only the Kafka port is needed (no Zookeeper)
	Env: map[string]string{
		// KRaft mode configuration
		"KAFKA_NODE_ID":                                  "1",
		"KAFKA_PROCESS_ROLES":                            "broker,controller",
		"KAFKA_CONTROLLER_QUORUM_VOTERS":                 "1@localhost:9093",
		"KAFKA_LISTENERS":                                "PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093",
		"KAFKA_ADVERTISED_LISTENERS":                     "PLAINTEXT://localhost:9092",
		"KAFKA_LISTENER_SECURITY_PROTOCOL_MAP":           "PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT",
		"KAFKA_CONTROLLER_LISTENER_NAMES":                "CONTROLLER",
		"KAFKA_INTER_BROKER_LISTENER_NAME":               "PLAINTEXT",
		"KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR":         "1",
		"KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR": "1",
		"KAFKA_TRANSACTION_STATE_LOG_MIN_ISR":            "1",
		"KAFKA_LOG_DIRS":                                 "/tmp/kraft-combined-logs",
		"KAFKA_AUTO_CREATE_TOPICS_ENABLE":                "true",
	},
	WaitingFor: wait.ForLog("Kafka Server started").WithStartupTimeout(120 * time.Second),
}
```
### Benefits of KRaft Mode:
1. **Simplified Architecture**: No separate Zookeeper cluster to manage
2. **Better Performance**: Reduced latency and improved throughput
3. **Easier Scaling**: Simpler cluster management and scaling operations
4. **Official Support**: Uses the official Apache Kafka image maintained by the Kafka team
5. **Future-Proof**: KRaft is the future of Kafka (Zookeeper support is being deprecated)

### Usage in Tests:
The testcontainer setup automatically:
- Starts a Kafka broker in KRaft mode
- Waits for Kafka to be fully ready
- Creates the topic named by the `KAFKA_TOPIC` environment variable
- Sets the `KAFKA_BROKERS` environment variable for the application to use
- Provides a teardown function to clean up resources
### Environment Variables Required:
- `KAFKA_TOPIC`: The name of the Kafka topic to create for testing
- `KAFKA_BROKERS`: Automatically set by the testcontainer setup function

### Manual Testing with Official Image:
If you want to run Kafka manually for testing, you can use the same official image:

```shell
# Run Kafka in KRaft mode
docker run -d --name kafka \
  -p 9092:9092 \
  -e KAFKA_NODE_ID=1 \
  -e KAFKA_PROCESS_ROLES=broker,controller \
  -e KAFKA_CONTROLLER_QUORUM_VOTERS=1@localhost:9093 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT \
  -e KAFKA_CONTROLLER_LISTENER_NAMES=CONTROLLER \
  -e KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  -e KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1 \
  -e KAFKA_TRANSACTION_STATE_LOG_MIN_ISR=1 \
  -e KAFKA_LOG_DIRS=/tmp/kraft-combined-logs \
  -e KAFKA_AUTO_CREATE_TOPICS_ENABLE=true \
  apache/kafka:3.8.1

# Create topic manually if needed
docker exec -it kafka /opt/kafka/bin/kafka-topics.sh \
  --create --topic credentials \
  --bootstrap-server localhost:9092 \
  --partitions 1 --replication-factor 1

# Console consumer
docker exec -it kafka /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 --topic credentials --from-beginning

# Console producer
docker exec -it kafka /opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 --topic credentials
```
