Hello,
I was able to use the Confluent Kafka JDBC connector with MSK, integrated with the AWS Glue Schema Registry (GSR), using the steps below. Posting the steps here in case it helps. Note: I am using MySQL as my source instead of Oracle.
- Collect the following jars:
- Build the GSR Avro schema converter jar:

```
wget https://github.com/awslabs/aws-glue-schema-registry/archive/refs/tags/v1.1.8.zip
unzip v1.1.8.zip
cd aws-glue-schema-registry-1.1.8
mvn clean install
mvn dependency:copy-dependencies
```

A jar file named schema-registry-kafkaconnect-converter-1.1.8.jar gets created in the avro-kafkaconnect-converter/target/ directory.
- Download the MySQL connector jar mysql-connector-java-8.0.29.jar from the MySQL site. If your source is Oracle, download the Oracle JDBC jar here instead.
- Get the Kafka JDBC connector jar kafka-connect-jdbc-10.4.1.jar from https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc
- I zipped all three of the above jars and uploaded the zip file to an S3 bucket.
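The packaging step can be sketched like this (the bucket name and directory layout are placeholders; the commented-out copies assume the build and download steps above):

```shell
set -e
mkdir -p plugin-jars
# Copy the three jars collected above into plugin-jars/, e.g.:
# cp aws-glue-schema-registry-1.1.8/avro-kafkaconnect-converter/target/schema-registry-kafkaconnect-converter-1.1.8.jar plugin-jars/
# cp mysql-connector-java-8.0.29.jar kafka-connect-jdbc-10.4.1.jar plugin-jars/

# Flatten the jars into a single zip (MSK Connect expects the jars at the top level)
zip -rj msk-jdbc-gsr-plugin.zip plugin-jars/

# Upload to S3 (bucket name is a placeholder)
aws s3 cp msk-jdbc-gsr-plugin.zip s3://my-plugin-bucket/msk-jdbc-gsr-plugin.zip
```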
- I created an MSK custom plugin from the zip file in the S3 bucket.
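The plugin can also be registered from the CLI; a sketch with placeholder names (bucket ARN and file key must match where you uploaded the zip):

```shell
# Register the zip in S3 as an MSK Connect custom plugin
aws kafkaconnect create-custom-plugin \
  --name jdbc-gsr-plugin \
  --content-type ZIP \
  --location 's3Location={bucketArn=arn:aws:s3:::my-plugin-bucket,fileKey=msk-jdbc-gsr-plugin.zip}'
```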
- I created a simple MSK cluster (without any authentication) in the private subnets of my VPC, which has a route to the internet via a NAT gateway.
- I created a topic with the same name as the MySQL table.
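Topic creation from a client machine with the Kafka CLI tools would look like this (the bootstrap address, partition count, and replication factor are placeholders):

```shell
# Create the topic named after the source table before starting the connector
bin/kafka-topics.sh --create \
  --bootstrap-server b-1.mycluster.xxxxxx.kafka.us-east-1.amazonaws.com:9092 \
  --topic mytbl \
  --partitions 1 \
  --replication-factor 2
```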
- I created an MSK connector from the custom plugin created above, with a configuration like the following:
```
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:mysql://myip:3306/mydb
connection.user=XXXXX
connection.password=XXXX
table.whitelist=mytbl
tasks.max=5
mode=bulk
key.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=true
key.converter.avroRecordType=GENERIC_RECORD
key.converter.region=us-east-1
key.converter.registry.name=testregistry
key.converter.schemaAutoRegistrationEnabled=true
value.converter=com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter
value.converter.schemas.enable=true
value.converter.avroRecordType=GENERIC_RECORD
value.converter.region=us-east-1
value.converter.registry.name=testregistry
value.converter.schemaAutoRegistrationEnabled=true
```
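The console takes these properties as key=value lines; if you create the connector through the MSK Connect API instead, the same configuration goes in as a JSON map. A hypothetical sketch (abridged to the key settings; every ARN, subnet, security group, and bootstrap address below is a placeholder you must replace):

```shell
# Sketch of creating the connector via the MSK Connect API rather than the console
cat > connector.json <<'EOF'
{
  "connectorName": "jdbc-source-gsr",
  "kafkaConnectVersion": "2.7.1",
  "capacity": { "provisionedCapacity": { "mcuCount": 1, "workerCount": 1 } },
  "connectorConfiguration": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:mysql://myip:3306/mydb",
    "table.whitelist": "mytbl",
    "tasks.max": "5",
    "mode": "bulk",
    "value.converter": "com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter",
    "value.converter.region": "us-east-1",
    "value.converter.registry.name": "testregistry",
    "value.converter.schemaAutoRegistrationEnabled": "true"
  },
  "kafkaCluster": {
    "apacheKafkaCluster": {
      "bootstrapServers": "b-1.mycluster.xxxxxx.kafka.us-east-1.amazonaws.com:9092",
      "vpc": { "securityGroups": ["sg-placeholder"], "subnets": ["subnet-placeholder"] }
    }
  },
  "kafkaClusterClientAuthentication": { "authenticationType": "NONE" },
  "kafkaClusterEncryptionInTransit": { "encryptionType": "PLAINTEXT" },
  "plugins": [ { "customPlugin": { "customPluginArn": "arn:aws:kafkaconnect:placeholder", "revision": 1 } } ],
  "serviceExecutionRoleArn": "arn:aws:iam::123456789012:role/msk-connect-role"
}
EOF
aws kafkaconnect create-connector --cli-input-json file://connector.json
```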
Ref links:
- https://docs.confluent.io/kafka-connect-jdbc/current/source-connector/source_config_options.html#jdbc-source-configs
- https://docs.confluent.io/platform/current/schema-registry/connect.html
- https://aws.amazon.com/blogs/big-data/evolve-json-schemas-in-amazon-msk-and-amazon-kinesis-data-streams-with-the-aws-glue-schema-registry/
After completing all of the above steps, the MSK JDBC connector is able to extract the table rows and push them into the MSK topic.
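To spot-check that rows are flowing, one can run the plain console consumer against the topic (the values are GSR-framed Avro, so they won't be human-readable, but the record keys and count confirm data is arriving); the bootstrap address is a placeholder:

```shell
# Consume from the beginning; keys are plain strings (StringConverter), values are Avro bytes
bin/kafka-console-consumer.sh \
  --bootstrap-server b-1.mycluster.xxxxxx.kafka.us-east-1.amazonaws.com:9092 \
  --topic mytbl \
  --from-beginning \
  --property print.key=true
```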