Kafka version: 2.11-0.10.1.0
1. Our production Kafka cluster gets scanned periodically by NSFOCUS (绿盟) security scans. Reproducing this locally, an nmap scan against the Kafka port triggered an OOM on the broker almost immediately. The likely cause is that nmap's built-in service probes talk to the Kafka listener with requests whose fields are all garbage, and the broker does not cope with them safely, so it still feels like a Kafka bug.
2. Without touching buffer.memory, a single scan is enough to cause the OOM. Increasing buffer.memory lets the broker survive a few more scans, but that is not a real solution, and putting a firewall in front is too expensive because a lot of the security-scanning tooling needs to be able to connect.

The scan that triggers it:
nmap -p 9092 -T4 -A -v 172.17.1.6
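To make the failure mode concrete, here is a rough, hypothetical reproducer in Java (my own sketch, not the actual nmap probe): it opens plain TCP connections to the listener and writes a fake 4-byte Kafka size prefix followed by a few garbage bytes. The class name FakeProbe, the connection count and the claimed size are all made up for illustration; the host and port are just the ones from the scan above.

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.ByteBuffer;

// Hypothetical sketch of scanner-like traffic: each connection claims a huge
// request size in the 4-byte length prefix and then sends only garbage.
public class FakeProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "172.17.1.6";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 9092;

        for (int i = 0; i < 20; i++) {
            try (Socket s = new Socket(host, port)) {
                OutputStream out = s.getOutputStream();
                ByteBuffer frame = ByteBuffer.allocate(8);
                frame.putInt(100 * 1024 * 1024 - 1); // claimed size, just under the socket.request.max.bytes default
                frame.putInt(0xDEADBEEF);            // garbage where the request header should be
                out.write(frame.array());
                out.flush();
                Thread.sleep(200);                   // let the broker start reading before the socket closes
            }
        }
    }
}
```

Right after the real scan, the broker log shows the malformed-request errors and then the direct-memory OOM: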
[2021-08-04 22:00:01,092] ERROR Closing socket for 172.17.1.6:6667-172.17.1.1:47865 because of error (kafka.network.Processor)
org.apache.kafka.common.errors.InvalidRequestException: Error parsing request header. Our best guess of the apiKey is: 27265
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'client_id': Error reading string of length 513, only 103 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
at org.apache.kafka.common.requests.RequestHeader.parse(RequestHeader.java:80)
at kafka.network.RequestChannel$Request.liftedTree1$1(RequestChannel.scala:82)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:82)
at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:492)
at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:487)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.processCompletedReceives(SocketServer.scala:487)
at kafka.network.Processor.run(SocketServer.scala:417)
at java.lang.Thread.run(Thread.java:748)
[2021-08-04 22:00:01,094] ERROR Closing socket for 172.17.1.6:6667-172.17.1.1:47867 because of error (kafka.network.Processor)
org.apache.kafka.common.errors.InvalidRequestException: Error getting request for apiKey: -173 and apiVersion: 19778
Caused by: java.lang.IllegalArgumentException: Unexpected ApiKeys id `-173`, it should be between `0` and `20` (inclusive)
at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:73)
at org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
at kafka.network.RequestChannel$Request.liftedTree2$1(RequestChannel.scala:96)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:91)
at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:492)
at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:487)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.processCompletedReceives(SocketServer.scala:487)
at kafka.network.Processor.run(SocketServer.scala:417)
at java.lang.Thread.run(Thread.java:748)
[2021-08-04 22:00:39,516] ERROR Processor got uncaught exception. (kafka.network.Processor)
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:694)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:241)
at sun.nio.ch.IOUtil.read(IOUtil.java:195)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:110)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:97)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:343)
at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
at kafka.network.Processor.poll(SocketServer.scala:476)
at kafka.network.Processor.run(SocketServer.scala:416)
at java.lang.Thread.run(Thread.java:748)
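Reading the last stack trace, the OOM does not happen while parsing the request but in the JDK's socket read path. As far as I can tell, the broker first reads the 4-byte size prefix from the connection, allocates a heap receive buffer of that claimed size (bounded only by socket.request.max.bytes, about 100 MB by default), and when it reads the socket into that buffer, sun.nio.ch.IOUtil.read grabs a temporary direct buffer of the same size; a few garbage connections in a row are enough to exhaust direct memory. A simplified sketch of that receive path (my own approximation, not the real NetworkReceive code) looks like this:

```java
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

// Simplified approximation of the receive path implicated in the trace above:
// the broker trusts whatever 4-byte size prefix the peer sends.
public class ReceiveSketch {
    // socket.request.max.bytes defaults to roughly 100 MB
    private static final int MAX_REQUEST_BYTES = 100 * 1024 * 1024;

    static ByteBuffer readRequest(ReadableByteChannel channel) throws Exception {
        ByteBuffer sizeBuf = ByteBuffer.allocate(4);
        channel.read(sizeBuf);
        sizeBuf.flip();
        int claimedSize = sizeBuf.getInt();              // 4 arbitrary bytes from the peer
        if (claimedSize < 0 || claimedSize > MAX_REQUEST_BYTES) {
            throw new IllegalStateException("Invalid receive size " + claimedSize);
        }
        // A buffer of the *claimed* size is allocated before any payload arrives.
        // Reading the socket into it makes the JDK allocate a temporary direct
        // buffer of the same size (sun.nio.ch.Util.getTemporaryDirectBuffer),
        // which is where the "Direct buffer memory" OOM comes from.
        ByteBuffer payload = ByteBuffer.allocate(claimedSize);
        channel.read(payload);
        return payload;
    }
}
```

If that reading is right, raising memory settings only pushes the ceiling out. As far as I know, newer broker versions put a bound on the memory used by in-flight requests (KIP-72 / queued.max.request.bytes), so upgrading looks like the more realistic long-term fix, given that firewalling the port off is not an option here.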