MSK Connect Debezium source connector error


Hi,

I am trying to use a custom plugin with a Debezium Postgres source connector to capture database changes into my MSK Serverless cluster. From my reading, the maximum number of partitions per serverless cluster is 2,400. However, I am getting a Connect error:

Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InvalidRequestException: Quota exceeded for maximum number of partitions

I find it hard to believe that I am exceeding the allotted number of partitions in my cluster, especially since the cluster is freshly created and has no other topics in it. I am also using a provisioned Connect configuration with one worker.

Here is my connect configuration:

connector.class=io.debezium.connector.postgresql.PostgresConnector
value.converter.schemaAutoRegistrationEnabled=true
transforms.unwrap.delete.handling.mode=rewrite
topic.creation.default.partitions=5
transforms.extractKeyFromStruct.type=org.apache.kafka.connect.transforms.ExtractField$Key
auto.create.topics.enable=true
tasks.max=1
database.history.consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.kafka.topic=dbhistory.omitted
transforms=unwrap,extractKeyFromStruct,copyIdToKey,AddNamespace
transforms.extractKeyFromStruct.field=id
database.history.producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
transforms.AddNamespace.type=org.apache.kafka.connect.transforms.SetSchemaMetadata$Value
database.history.consumer.security.protocol=SASL_SSL
transforms.copyIdToKey.type=org.apache.kafka.connect.transforms.ValueToKey
topic.prefix=omitted
transforms.topicRename.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.topicRename.replacement=$1
transforms.unwrap.drop.tombstones=false
transforms.copyIdToKey.fields=id
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
value.converter=io.confluent.connect.avro.AvroConverter
key.converter=org.apache.kafka.connect.storage.StringConverter
database.history.producer.sasl.mechanism=AWS_MSK_IAM
database.history.producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.user=omitted
database.dbname=omitted
database.history.producer.security.protocol=SASL_SSL
database.history.kafka.bootstrap.servers=omitted
database.server.name=omitted
database.port=omitted
plugin.name=pgoutput
value.converter.schema.registry.url=omitted
key.converter.schemas.enable=false
database.hostname=omitted
database.password=omitted
value.converter.schemas.enable=true
transforms.unwrap.add.fields=op,source.ts_ms
table.include.list=purchased_tickets
database.history.consumer.sasl.mechanism=AWS_MSK_IAM

As you can see, I even set "topic.creation.default.partitions=5". Note that this is in a development environment.
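One way to check whether the cluster really is anywhere near the 2,400-partition limit is to sum the partition count across all topics (including internal ones) with the standard `kafka-topics.sh` CLI. This is only a sketch: the bootstrap endpoint is a placeholder, and `client.properties` must carry the same IAM auth settings the connector uses.

```shell
#!/bin/sh
# Sketch: count every partition visible on the cluster.
# The endpoint below is a placeholder; client.properties needs the
# IAM auth settings, e.g.:
#   security.protocol=SASL_SSL
#   sasl.mechanism=AWS_MSK_IAM
#   sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
#   sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler

# Each "Partition:" line in `kafka-topics.sh --describe` output is one
# partition, so counting those lines gives the cluster-wide total.
count_partitions() {
  grep -c 'Partition: '
}

kafka-topics.sh \
  --bootstrap-server "boot-xxxx.kafka-serverless.us-east-1.amazonaws.com:9098" \
  --command-config client.properties \
  --describe 2>/dev/null | count_partitions
```

If the printed total is well below 2,400, that supports the suspicion that something other than your data topics is consuming the quota.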

So I do not know how I am exceeding the quota. Has anyone else run into this issue? Please advise, thank you.

asked 10 months ago · 690 views
1 Answer

Hello. If you have a large number of topics/partitions in your MSK Serverless cluster, you may see the following error:

"org.apache.kafka.common.errors.InvalidRequestException: Quota exceeded for maximum number of partitions"

This error indicates that the partition quota for the MSK Serverless cluster has been exceeded. As noted in the documentation linked below, MSK Serverless allows a maximum of 2,400 partitions for non-compacted topics and 120 for compacted topics.

Note also that the partitions of the internal connector offset and config topics count toward this limit.
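As an illustration of how these internal topics add up, here is a rough partition budget for a single-connector setup like the one above. This assumes the vanilla Kafka Connect defaults for the internal topics (offset storage: 25 partitions, config storage: 1, status storage: 5); MSK Connect manages these topics itself, so the real counts may differ.

```shell
#!/bin/sh
# Illustrative partition budget, assuming vanilla Kafka Connect
# defaults for the internal topics (an assumption; MSK Connect
# manages these itself and may use different values).
offsets=25   # Connect offset storage topic (default 25 partitions)
configs=1    # Connect config storage topic (1 partition)
status=5     # Connect status storage topic (default 5 partitions)
history=1    # Debezium database history topic
data=5       # topic.creation.default.partitions=5, one captured table
echo $((offsets + configs + status + history + data))   # prints 37
```

Even with the internal topics included, a single fresh connector should sit far below 2,400 partitions, which is why a full topic listing (or an AWS Support investigation) is needed to find what is actually consuming the quota.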

[+] Amazon MSK quota - MSK Serverless quota - https://docs.aws.amazon.com/msk/latest/developerguide/limits.html#serverless-quota

To resolve this error, you can reduce the number of partitions by deleting some topics on the cluster. Alternatively, you can request a quota increase to a limit that you define: select "Looking for service limit increases?" on the Create Case page in the AWS Support console.

For a detailed cluster investigation, please open a case with AWS Support and reference the affected MSK cluster. Do not share any sensitive information in this post.

AWS
SUPPORT ENGINEER
answered 10 months ago
