The Kafka version is kafka_2.12-0.11.0.2 and the cluster has three nodes. The broker configuration is as follows:
broker.id=1
listeners=PLAINTEXT://192.168.1.2:16092
advertised.listeners=PLAINTEXT://192.168.1.2:16092
num.network.threads=10
num.io.threads=20
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka-logs
num.partitions=40
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=2
log.retention.hours=12
message.max.bytes=5242880
default.replication.factor=2
replica.fetch.max.bytes=5242880
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=zk01:2181,node02.zk02:2181,zk03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=3000
After stopping one of the three nodes, part of the data was lost: the producer had sent 50 messages, and roughly 5 of them were lost. A sketch of the kind of send loop used is shown below.
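This is only a minimal sketch of how the 50 test messages might be produced; the topic name test-topic, the bootstrap address, and the acks/retries values are assumptions, since the actual producer code and configuration are not shown in the question. The acks and retries settings in particular determine whether an in-flight send survives a broker going down.

// Sketch of a simple test producer against the cluster described above.
// Topic name and producer settings are placeholders, not the real ones.
import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.2:16092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks/retries strongly affect durability during a broker shutdown;
        // the values actually used by the real producer are unknown.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, "3");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 50; i++) {
                Future<RecordMetadata> f =
                        producer.send(new ProducerRecord<>("test-topic", "key-" + i, "value-" + i));
                // Block on each send so a failure surfaces as an exception
                // instead of being dropped silently.
                f.get();
            }
        }
    }
}

For context, after the broker is stopped, bin/kafka-topics.sh --describe --zookeeper zk01:2181 --topic test-topic (with the placeholder topic name replaced by the real one) shows which partitions had that broker as leader and whether their ISR has shrunk to a single replica.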