Question: configuration for the new Kafka consumer
kafka

1. The configuration for the new version of the Kafka consumer doesn't seem to be covered in the article?

2. My consumer keeps printing the message below, and I'm not sure whether this is normal:

Auto offset commit failed for group baiying-visualize: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.

The relevant part of my code:

ConsumerRecords<String, String> records = consumer.poll(100);

Part of the configuration in my properties file:

enable.auto.commit=true
auto.commit.interval.ms=1000
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
max.poll.interval.ms=500 
max.poll.records=50
session.timeout.ms=30000
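
For context, here is a minimal, self-contained sketch of the kind of poll loop the quoted line and properties could come from. It is an assumption, not the asker's actual program: the bootstrap server address, the topic name "some-topic", the class name, and the per-record processing are placeholders added for illustration; only the group id (taken from the error message) and the property values are quoted from the question.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class VisualizeConsumer {                                // placeholder class name
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");       // placeholder broker address
        props.put("group.id", "baiying-visualize");             // group id from the error message
        // The values below are the ones quoted from the question's properties file.
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("max.poll.interval.ms", "500");
        props.put("max.poll.records", "50");
        props.put("session.timeout.ms", "30000");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic"));   // placeholder topic
            while (true) {
                // The line quoted in the question; poll(long) is the older overload,
                // clients from 2.0 onward prefer poll(Duration.ofMillis(100)).
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    // Per-record processing goes here; if the time between successive
                    // poll() calls exceeds max.poll.interval.ms, the group rebalances
                    // and produces exactly the warning quoted above.
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}

Note that with max.poll.interval.ms at 500 ms, any batch of up to 50 records that takes longer than half a second to process would reproduce the quoted warning, which is consistent with the explanation embedded in the log message itself.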