How to containerize kafka-kinesis-connector?

I have an on-prem data pipeline with MQTT + Kafka, each containerized locally. Now I want to connect the pipeline upstream to the Cloud/Internet via AWS Kinesis, which requires a Kafka-to-Kinesis connector.

version: '3'
services:  
  nodered:
    container_name: nodered
    image: nodered/node-red
    ports:
      - "1880:1880"
    volumes:
      - ./nodered:/data
    depends_on:
      - mosquitto
    environment:
      - TZ=America/Toronto
      - NODE_RED_ENABLE_PROJECTS=true
    restart: always
  mosquitto:
    image: eclipse-mosquitto
    container_name: mqtt
    restart: always
    ports:
      - "1883:1883"
    volumes:
      - "./mosquitto/config:/mosquitto/config"
      - "./mosquitto/data:/mosquitto/data"
      - "./mosquitto/log:/mosquitto/log"
    environment:
      - TZ=America/Toronto
    user: "${PUID}:${PGID}"
  portainer:
    ports:
      - "9000:9000"
    container_name: portainer
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./portainer/portainer_data:/data"
    image: portainer/portainer-ce
  zookeeper:
    image: zookeeper:3.4
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/data"
  kafka:
    image: wurstmeister/kafka:1.0.0
    container_name: kafka
    ports:
      - "9092:9092"
      - "9093:9093"
    volumes:
      - "kafka_data:/data"
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=10.0.0.129:2181
      - KAFKA_ADVERTISED_HOST_NAME=10.0.0.129
      - JMX_PORT=9093
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_LOG_RETENTION_HOURS=1
      - KAFKA_MESSAGE_MAX_BYTES=10000000
      - KAFKA_REPLICA_FETCH_MAX_BYTES=10000000
      - KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS=60000
      - KAFKA_NUM_PARTITIONS=2
      - KAFKA_DELETE_RETENTION_MS=1000
    depends_on:
      - zookeeper
    restart: on-failure
  cmak:
    image: hlebalbau/kafka-manager:1.3.3.16
    container_name: kafka-manager
    restart: always
    depends_on:
      - kafka
      - zookeeper
    ports:
      - "9080:9080"
    environment:
      - ZK_HOSTS=10.0.0.129
      - APPLICATION_SECRET=letmein
    command: -Dconfig.file=/kafka-manager/conf/application.conf -Dapplication.home=/kafkamanager -Dhttp.port=9080

volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local

I found this one from your labs: https://github.com/awslabs/kinesis-kafka-connector

Again, I run everything from a docker-compose file and that works, but I'm not sure whether there's an existing image or documentation that can help me containerize this connector. Will I have to create my own custom image via a Dockerfile? Any examples?

Thank you.

1 Answer
The GitHub README gives the command for building the jar file.
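Roughly, that amounts to cloning the repository and running a standard Maven build (check the README for the exact Maven goal; the jar name and version below are illustrative):

```shell
git clone https://github.com/awslabs/kinesis-kafka-connector.git
cd kinesis-kafka-connector
mvn package   # produces something like target/amazon-kinesis-kafka-connector-<version>.jar
```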

As you rightly suspected, you should be able to create a Docker image by writing a Dockerfile and running the docker build command. The GitHub README does not mention any pre-built container image.
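One common pattern is a multi-stage Dockerfile: build the connector jar in a Maven stage, then copy it onto a Kafka Connect base image. The sketch below is illustrative only — the base-image tags, jar name, and plugin path are assumptions you should verify against the connector README and your chosen Connect distribution:

```dockerfile
# Sketch only: image tags, jar name, and plugin path are assumptions.
# Clone the awslabs/kinesis-kafka-connector repo into this build context first.
FROM maven:3.8-eclipse-temurin-11 AS build
COPY kinesis-kafka-connector /src
WORKDIR /src
RUN mvn -q package -DskipTests

# Layer the built jar onto a Kafka Connect image so Connect can load it
# as a plugin (plugin directory varies by distribution -- verify it).
FROM confluentinc/cp-kafka-connect:7.4.0
COPY --from=build /src/target/amazon-kinesis-kafka-connector-*.jar \
     /usr/share/confluent-hub-components/kinesis-kafka-connector/
```

You could then add the resulting image to your existing docker-compose file as another service (with a `build:` entry pointing at the Dockerfile's directory and a `depends_on: kafka`), passing your AWS credentials and the Connect worker/connector properties via environment variables or mounted config files.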

AWS
EXPERT
answered 2 years ago
