
Kafka in Docker: cannot produce or consume through a gateway-forwarded IP

kafka docker

1. There is an existing setup:

kafka broker 10.10.3.1:9092 
zk 10.10.3.1:2181

2. A new port needs to be exposed for devices on another network.

3. The router/gateway forwards the corresponding address:

10.10.3.1:9888 -> 192.168.1.110:12340
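Before looking at Kafka itself, it can help to confirm the forwarded endpoint is reachable at the plain TCP level. A minimal sketch, assuming the addresses from the mapping above; `port_reachable` is a hypothetical helper, not part of any Kafka library:

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a plain TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the client-side network, this should succeed if the gateway
# forward 192.168.1.110:12340 -> 10.10.3.1:9888 is actually in place:
# port_reachable("192.168.1.110", 12340)
```

If this returns False, the problem is in the gateway forwarding rather than in the broker configuration.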

4. Add a new broker, started via Docker:

docker run -d --restart on-failure:3 \
  -p 9888:9888 \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_ZOOKEEPER_CONNECT=10.10.3.1:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka.com:12340 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9888 \
  -e AUTO_CREATE_TOPICS=false \
  -t wurstmeister/kafka

5. Update the hosts file inside the new broker's container:

10.10.3.1 kafka.com

6. Update the client's hosts file:

192.168.1.110 kafka.com
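Since `kafka.com` must resolve to a different IP on each side, a quick check that the hosts entries took effect can be done from Python. A small sketch; `resolved_ip` is a hypothetical helper around the standard resolver:

```python
import socket

def resolved_ip(hostname):
    """Return the IPv4 address the local resolver (which consults /etc/hosts) gives."""
    return socket.gethostbyname(hostname)

# On the client, resolved_ip("kafka.com") should return "192.168.1.110";
# inside the broker container it should return "10.10.3.1".
```

If either side resolves `kafka.com` to the wrong address, the broker will advertise (or the client will dial) an endpoint that the gateway forward does not cover.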

7. Access from a Python client:

from kafka import KafkaProducer

kafka_topic = "test"
# resolves to 192.168.1.110 via the client's hosts entry
kafka_bootstrap_servers = "kafka.com:12340"
kafka_producer = KafkaProducer(bootstrap_servers=kafka_bootstrap_servers)
print("begin send")
print(kafka_producer._metadata)  # cluster metadata fetched during bootstrap
kafka_producer.send(kafka_topic, value=b"aaaaaaaaaaaaaaaaaaa")

8. The connection is established, but the broker metadata cannot be retrieved:

begin send
ClusterMetadata(brokers: 0, topics: 0, groups: 0)
Traceback (most recent call last):
  File "kafka_test.py", line 9, in <module>
    kafka_producer.send(kafka_topic,value=b"aaaaaaaaaaaaaaaaaaa")
  File "/usr/local/lib/python3.6/dist-packages/kafka/producer/kafka.py", line 576, in send
    self._wait_on_metadata(topic, self.config['max_block_ms'] / 1000.0)
  File "/usr/local/lib/python3.6/dist-packages/kafka/producer/kafka.py", line 703, in _wait_on_metadata
    "Failed to update metadata after %.1f secs." % (max_wait,))
kafka.errors.KafkaTimeoutError: KafkaTimeoutError: Failed to update metadata after 60.0 secs.

Is this way of adding a broker incorrect?