Issue when attempting to access a container inside and outside Docker environment #199

@wedla

Description

Hello folks, how are you doing?! Hope you're well and safe.

I'm having an issue when using the landoop/fast-data-dev image on Docker. I have the following docker-compose file:

version: "3.8"

networks:
  minha-rede:
    driver: bridge

services:

  postgresql-master:
    hostname: postgresqlmaster
    image: postgres:12.8
    restart: "no"
    environment:
      POSTGRES_USER: ***
      POSTGRES_PASSWORD: ***
      POSTGRES_PGAUDIT_LOG: READ, WRITE
      POSTGRES_DB: postgres
      PG_REP_USER: ***
      PG_REP_PASSWORD: ***
      PG_VERSION: 12
      DB_PORT: 5432
    ports:
      - "5432:5432"
    volumes:
      - ./init_database.sql:/docker-entrypoint-initdb.d/init_database.sql
    healthcheck:
      test: pg_isready -U $$POSTGRES_USER -d postgres
      start_period: 10s
      interval: 5s
      timeout: 5s
      retries: 10
    networks:
      - minha-rede

  kafka-cluster:
    image: landoop/fast-data-dev:cp3.3.0
    environment:
      ADV_HOST: kafka-cluster
      RUNTESTS: 0
      FORWARDLOGS: 0
      SAMPLEDATA: 0
    ports:
      - 32181:2181
      - 3030:3030
      - 8081-8083:8081-8083
      - 9581-9585:9581-9585
      - 9092:9092
      - 29092:29092
    healthcheck:
      test: ["CMD-SHELL", "/opt/confluent/bin/kafka-topics --list --zookeeper localhost:2181"]
      interval: 15s
      timeout: 5s
      retries: 10
      start_period: 30s
    networks:
      - minha-rede

  kafka-topics-setup:
    image: landoop/fast-data-dev:cp3.3.0
    environment:
      ADV_HOST: kafka-cluster
      RUNTESTS: 0
      FORWARDLOGS: 0
      SAMPLEDATA: 0
    command:
      - /bin/bash
      - -c
      - |
        kafka-topics --zookeeper kafka-cluster:2181 --create --topic topic-name-1 --partitions 3 --replication-factor 1
        kafka-topics --zookeeper kafka-cluster:2181 --create --topic topic-name-2 --partitions 3 --replication-factor 1
        kafka-topics --zookeeper kafka-cluster:2181 --create --topic topic-name-3 --partitions 3 --replication-factor 1
        kafka-topics --zookeeper kafka-cluster:2181 --list
    depends_on:
      kafka-cluster:
        condition: service_healthy
    networks:
      - minha-rede

  app:
    build:
      context: ../app
      dockerfile: ../app/DockerfileTaaC
      args:
        HTTPS_PROXY: ${PROXY}
        HTTP_PROXY: ${PROXY}
        NO_PROXY: ${NO_PROXY}
    environment:
      LOG_LEVEL: "DEBUG"
      SPRING_PROFILES_ACTIVE: "local"
      APP_ENABLE_RECEIVER: "true"
      APP_ENABLE_SENDER: "true"
      ENVIRONMENT: "local"
      SPRING_DATASOURCE_URL: "jdbc:postgresql://postgresql-master:5432/postgres"
      SPRING_KAFKA_PROPERTIES_SCHEMA_REGISTRY_URL: "http://kafka-cluster:8081/"
      SPRING_KAFKA_BOOTSTRAP_SERVERS: "kafka-cluster:9092"
    volumes:
      - $HOME/.m2:/root/.m2
    depends_on:
      postgresql-master:
        condition: service_healthy
      kafka-cluster:
        condition: service_healthy
      kafka-topics-setup:
        condition: service_started
    networks:
      - minha-rede

As you can see, I have a Spring Boot application that communicates with Kafka. Everything works as long as ADV_HOST is set to the container name (kafka-cluster). The problem is that I also have a test application that runs outside Docker. It implements a Kafka consumer, so it needs to reach the kafka-cluster, which I tried to configure like this:

    bootstrap-servers: "localhost:9092" # Kafka bootstrap servers
    schema-registry-url: "http://localhost:8081/" # Kafka schema registry URL
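
For context on why this configuration fails: the bootstrap connection to localhost:9092 succeeds, but the broker then advertises itself as ADV_HOST (kafka-cluster) in its metadata, and the external consumer tries to reconnect to that name. On the stock Confluent Kafka images, the usual way to serve both audiences is two advertised listeners, one per network. A sketch of that pattern for comparison (these KAFKA_* variables belong to the plain cp-kafka image; fast-data-dev's single ADV_HOST may not expose an equivalent):

```yaml
# Hypothetical dual-listener setup (Confluent cp-kafka style, NOT fast-data-dev's ADV_HOST).
# INTERNAL is what other containers use; EXTERNAL is what host-side clients use.
environment:
  KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
  KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-cluster:29092,EXTERNAL://localhost:9092
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
  KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
```

With that pattern, in-network clients would bootstrap against kafka-cluster:29092 and host-side clients against localhost:9092, and each would get back an address it can actually reach.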

The problem I'm getting is the following error:

[Thread-0] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-TestStack-1, groupId=TestStack] Error connecting to node kafka-cluster:9092 (id: 2147483647 rack: null)
java.net.UnknownHostException: kafka-cluster: nodename nor servname provided, or not known
at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:933)
at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1519)
at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:852)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1367)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1301)
at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:510)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:467)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:173)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:990)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:301)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:590)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:898)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:874)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:206)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:169)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:129)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:617)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:427)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:312)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:230)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:265)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:240)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.coordinatorUnknownAndUnreadySync(ConsumerCoordinator.java:492)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:524)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1276)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1240)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1220)
at br.com.itau.teststack.tests.utils.kafka.GenericConsumer.start(GenericConsumer.java:36)
at br.com.itau.teststack.tests.utils.kafka.HandleKafka.lambda$startConsumer$0(HandleKafka.java:25)
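
The UnknownHostException above is pure name resolution: nothing Kafka-specific fails until the client tries to resolve the advertised name. A minimal reproduction outside Kafka, just to illustrate the failure mode (hypothetical helper, not from my code):

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if `host` resolves to an IP address from this machine."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Inside the compose network, Docker's embedded DNS knows "kafka-cluster";
# on the host machine it does not, which is exactly the UnknownHostException above.
print(can_resolve("localhost"))        # True everywhere
print(can_resolve("kafka-cluster"))    # False on the host, unless mapped in /etc/hosts
```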

If I set the ADV_HOST environment variable to 127.0.0.1, my test app's consumer works fine, but my dockerized application doesn't, failing with the following problem:

2025/04/16 11:15:17.335 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] [WARN ]  Connection to node 0 (/127.0.0.1:9092) could not be established. Node may not be available.

I attempted to use a bridge network in the docker-compose file, as shown above, but it didn't help. Could this be a limitation? I've already reviewed the documentation for the fast-data-dev Docker image but couldn't find anything relevant to my issue. Could you please assist me?
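
One workaround I have seen suggested for this kind of setup, when the advertised name has to stay as the container name: make that name resolve on the host machine too, so both audiences can use the same bootstrap address. An untested sketch:

```
# /etc/hosts on the host machine (not inside any container) -- hypothetical workaround
127.0.0.1   kafka-cluster
```

The test app would then bootstrap against kafka-cluster:9092 instead of localhost:9092, and the advertised name would resolve both inside and outside Docker. I have not verified this against fast-data-dev specifically.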
