
ERROR Failed to clean up log for __consumer_offsets-41 in dir D:\kafka\node2\logs due to IOException (kafka.server.LogDirFailureChannel)

kafka

Version: Kafka 2.1 on Windows

Looking at the three-node Kafka cluster in Kafka Tool, only one node is still shown as healthy. The two nodes that no longer appear in Kafka Tool are nevertheless still running on Windows Server. Their logs contain the following:

[2019-12-04 16:00:02,119] ERROR Failed to clean up log for __consumer_offsets-41 in dir D:\kafka\node2\logs due to IOException (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: D:\kafka\node2\logs\__consumer_offsets-41\00000000000000000000.timeindex.cleaned: 
    The process cannot access the file because it is being used by another process.
    at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
    at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
    at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
    at java.nio.file.Files.deleteIfExists(Files.java:1165)
    at kafka.log.Log$.deleteFileIfExists(Log.scala:2272)
    at kafka.log.LogSegment$.deleteIfExists(LogSegment.scala:644)
    at kafka.log.LogCleaner$.createNewCleanedSegment(LogCleaner.scala:421)
    at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:537)
    at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:512)
    at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:511)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at kafka.log.Cleaner.doClean(LogCleaner.scala:511)
    at kafka.log.Cleaner.clean(LogCleaner.scala:489)
    at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:350)
    at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:319)
    at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:300)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
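The stack trace bottoms out in `Files.deleteIfExists`: the log cleaner tries to delete a `.timeindex.cleaned` file while the broker still holds a memory mapping on it. On Windows a mapped file cannot be deleted until the mapping is released, which is the long-standing Kafka-on-Windows problem tracked upstream as KAFKA-1194. A minimal sketch of just that OS-level behavior (the class name `MmapDeleteDemo` is hypothetical, not Kafka code):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.FileSystemException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDeleteDemo {

    /**
     * Attempts to delete a file while a memory mapping on it is still live.
     * Returns true if the delete succeeds (Unix-like unlink semantics),
     * false if the OS refuses because the file is "in use" (Windows).
     */
    static boolean deleteWhileMapped() throws IOException {
        Path f = Files.createTempFile("demo-", ".timeindex");
        try (FileChannel ch = FileChannel.open(f,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Kafka memory-maps its .index/.timeindex files the same way;
            // the mapping stays alive until it is garbage-collected.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buf.put(0, (byte) 1);
            try {
                // The same call LogSegment.deleteIfExists ends up making.
                Files.delete(f);
                return true;
            } catch (FileSystemException e) {
                // "The process cannot access the file because it is being
                // used by another process." on Windows.
                return false;
            }
        } finally {
            Files.deleteIfExists(f);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(deleteWhileMapped()
                ? "delete succeeded while mapped (Unix-like semantics)"
                : "delete blocked while mapped (Windows semantics)");
    }
}
```

On Linux the delete succeeds because unlink only removes the directory entry; on Windows it throws the exact `FileSystemException` seen in the log above, which then trips `LogDirFailureChannel` and takes the log directory (and eventually the broker) offline.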