Netty project - an event-driven asynchronous network application framework

Overview


Netty Project

Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients.

How to build

For detailed information about building and developing Netty, please visit the developer guide. This page only gives very basic information.

You require a recent JDK and Apache Maven to build Netty; see the developer guide for the exact versions.

Note that this is only a build-time requirement. JDK 5 (for 3.x) or 6 (for 4.0+ / 4.1+) is enough to run your Netty-based application.

Branches to look

Development of all versions takes place in the branch whose name is identical to <majorVersion>.<minorVersion>. For example, the development of 3.9 and 4.1 resides in the branches '3.9' and '4.1' respectively.

Usage with JDK 9+

Netty can be used in modular JDK9+ applications as a collection of automatic modules. The module names follow the reverse-DNS style and are derived from subproject names rather than root packages for historical reasons. They are listed below:

  • io.netty.all
  • io.netty.buffer
  • io.netty.codec
  • io.netty.codec.dns
  • io.netty.codec.haproxy
  • io.netty.codec.http
  • io.netty.codec.http2
  • io.netty.codec.memcache
  • io.netty.codec.mqtt
  • io.netty.codec.redis
  • io.netty.codec.smtp
  • io.netty.codec.socks
  • io.netty.codec.stomp
  • io.netty.codec.xml
  • io.netty.common
  • io.netty.handler
  • io.netty.handler.proxy
  • io.netty.resolver
  • io.netty.resolver.dns
  • io.netty.transport
  • io.netty.transport.epoll (native omitted - reserved keyword in Java)
  • io.netty.transport.kqueue (native omitted - reserved keyword in Java)
  • io.netty.transport.unix.common (native omitted - reserved keyword in Java)
  • io.netty.transport.rxtx
  • io.netty.transport.sctp
  • io.netty.transport.udt

Automatic modules do not provide any means to declare dependencies, so you need to list each used module separately in your module-info file.
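
For example, a hypothetical application module that only uses the common, buffer and transport subprojects might declare them like this (the com.example.app module name is made up for illustration):

    module com.example.app { // hypothetical application module name
        // Automatic modules cannot declare their own dependencies,
        // so every Netty module the application uses is required explicitly.
        requires io.netty.common;
        requires io.netty.buffer;
        requires io.netty.transport;
    }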

Comments
  • Memory leak in latest netty version.

    Memory leak in latest netty version.

    After a recent update to 4.1.7.Final (from 4.1.4.Final) my servers started dying with OOM within a few hours. Before that they ran for weeks with no issues.

    Error:

    io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 64 byte(s) of direct memory (used: 468189141, max: 468189184)
            at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:614) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:568) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.allocateDirect(UnpooledUnsafeNoCleanerDirectByteBuf.java:30) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.UnpooledUnsafeDirectByteBuf.<init>(UnpooledUnsafeDirectByteBuf.java:68) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.<init>(UnpooledUnsafeNoCleanerDirectByteBuf.java:25) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.UnsafeByteBufUtil.newUnsafeDirectByteBuf(UnsafeByteBufUtil.java:625) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.UnpooledByteBufAllocator.newDirectBuffer(UnpooledByteBufAllocator.java:65) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:170) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:131) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.channel.RecvByteBufAllocator$DelegatingHandle.allocate(RecvByteBufAllocator.java:124) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:956) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe$1.run(AbstractEpollChannel.java:359) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.util.concurrent.SingleThreadEventExecutor.safeExecute(SingleThreadEventExecutor.java:451) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:418) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:306) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877) ~[server-0.22.0-SNAPSHOT.jar:?]
            at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) ~[server-0.22.0-SNAPSHOT.jar:?]
            at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
    

    Or:

    08:28:00.752 WARN  - Failed to mark a promise as failure because it has succeeded already: DefaultChannelPromise@7cd20
    32d(success)io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 18713 byte(s) of direct memory (used: 468184872, max: 468189184)        
    	at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:631) ~[server-0.21.7-2.jar:?]        
    	at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:585) ~[server-0.21.7-2.jar:?]        
    	at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.allocateDirect(UnpooledUnsafeNoCleanerDirectByteBuf.java:30) ~[server-0.21.7-2.jar:?]        
    	at io.netty.buffer.UnpooledUnsafeDirectByteBuf.<init>(UnpooledUnsafeDirectByteBuf.java:68) ~[server-0.21.7-2.jar:?]
            at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.<init>(UnpooledUnsafeNoCleanerDirectByteBuf.java:25) ~[server-0.21.7-2.jar:?]
            at io.netty.buffer.UnsafeByteBufUtil.newUnsafeDirectByteBuf(UnsafeByteBufUtil.java:624) ~[server-0.21.7-2.jar:?]
            at io.netty.buffer.UnpooledByteBufAllocator.newDirectBuffer(UnpooledByteBufAllocator.java:65) ~[server-0.21.7-2.jar:?]
            at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179) ~[server-0.21.7-2.jar:?]
            at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:170) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:1533) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.allocateOutNetBuf(SslHandler.java:1544) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:575) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:550) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:531) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:1324) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.closeOutboundAndChannel(SslHandler.java:1307) ~[server-0.21.7-2.jar:?]
            at io.netty.handler.ssl.SslHandler.close(SslHandler.java:498) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:625) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:609) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.close(CombinedChannelDuplexHandler.java:504) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelOutboundHandlerAdapter.close(ChannelOutboundHandlerAdapter.java:71) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.CombinedChannelDuplexHandler.close(CombinedChannelDuplexHandler.java:315) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:625) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:609) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelDuplexHandler.close(ChannelDuplexHandler.java:73) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:625) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:609) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:466) ~[server-0.21.7-2.jar:?]
            at cc.blynk.server.core.protocol.handlers.DefaultExceptionHandler.handleUnexpectedException(DefaultExceptionHandler.java:59) ~[server-0.21.7-2.jar:?]
            at cc.blynk.server.core.protocol.handlers.DefaultExceptionHandler.handleGeneralException(DefaultExceptionHandler.java:43) ~[server-0.21.7-2.jar:?]
            at cc.blynk.core.http.handlers.StaticFileHandler.exceptionCaught(StaticFileHandler.java:277) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:286) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:265) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:257) ~[server-0.21.7-2.jar:?]
            at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireExceptionCaught(CombinedC:
    

    I restarted and took heap dumps before the abnormal memory consumption and after the first error messages shown above:

    (screenshot: heap usage right after server start vs. at the first OOM)

    This screenshot shows the difference between the heap right after server start (17% of the instance's RAM) and at the first OOM in the logs (31% of the instance's RAM). The instance has 2 GB of RAM, so it looks like all of the direct memory (468 MB) was consumed while the heap itself takes less than the direct buffers. Load on the server is pretty low: 900 req/sec with ~600 active connections. CPU consumption is only ~15%.

    I tried to analyze the heap dump, but I don't know Netty well enough to draw any conclusions.

    java version "1.8.0_111"
    Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
    Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
    
            <netty.version>4.1.7.Final</netty.version>
            <netty.tcnative.version>1.1.33.Fork25</netty.tcnative.version>
    
            <dependency>
                <groupId>io.netty</groupId>
                <artifactId>netty-transport-native-epoll</artifactId>
                <version>${netty.version}</version>
                <classifier>${epoll.os}</classifier>
            </dependency>
            <dependency>
                <groupId>io.netty</groupId>
                <artifactId>netty-tcnative</artifactId>
                <version>${netty.tcnative.version}</version>
                <classifier>${epoll.os}</classifier>
            </dependency>
    

    Right now I'm playing with

    -Dio.netty.leakDetectionLevel=advanced 
    -Dio.netty.noPreferDirect=true 
    -Dio.netty.allocator.type=unpooled 
    -Dio.netty.maxDirectMemory=0
    

    to find working settings. I'll update the ticket with additional info if I find any.
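
    As an aside, the leak-detection flag above also has a programmatic equivalent in Netty's public API; a minimal sketch:

    import io.netty.util.ResourceLeakDetector;

    public final class LeakDetectionSetup {
        public static void main(String[] args) {
            // Programmatic equivalent of -Dio.netty.leakDetectionLevel=advanced;
            // best set early, before ByteBuf instances start being allocated.
            ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.ADVANCED);
        }
    }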

    Unfortunately I wasn't able to reproduce this issue in the QA environment. Please let me know if you need more info.

    defect 
    opened by doom369 123
  • DNS Codec

    DNS Codec

    This is the codec for a DNS resolver. I also wrote a basic test program, "DNSTest", that uses the codec to resolve an address. This was part of a GSoC proposal for an asynchronous DNS resolver (I threw in an application).

    opened by mbakkar 107
  • ForkJoinPool-based EventLoopGroup and Channel Deregistration

    ForkJoinPool-based EventLoopGroup and Channel Deregistration

    Hey,

    This is a first version of Netty running on a ForkJoinPool with deregister (hopefully) working correctly. The main idea behind the changes to deregister is that deregister is always executed as a task and never invoked directly. We also do not allow any new task submissions after deregister has been called, which makes the deregistration task the last task of a particular Channel in the task queue, so we do not have to worry about moving tasks between EventLoops. We achieve this mainly by wrapping all calls to .eventLoop() and .executor(). There is some special treatment required for scheduled tasks, but this is best explained in the code.

    Let me know what you guys think.

    @normanmaurer @trustin

    feature 
    opened by buchgr 96
  • DNS resolver failing to find valid DNS record

    DNS resolver failing to find valid DNS record

    Expected behavior

    The DNS resolver should find valid DNS records.

    Actual behavior

    Exception thrown:

    Caused by: io.netty.resolver.dns.DnsNameResolverContext$SearchDomainUnknownHostException: Search domain query failed. Original hostname: 'host.toplevel' failed to resolve 'host.toplevel.search.domain' after 7 queries 
    	at io.netty.resolver.dns.DnsNameResolverContext.finishResolve(DnsNameResolverContext.java:721)
    	at io.netty.resolver.dns.DnsNameResolverContext.tryToFinishResolve(DnsNameResolverContext.java:663)
    	at io.netty.resolver.dns.DnsNameResolverContext.query(DnsNameResolverContext.java:306)
    	at io.netty.resolver.dns.DnsNameResolverContext.query(DnsNameResolverContext.java:295)
    	at io.netty.resolver.dns.DnsNameResolverContext.tryToFinishResolve(DnsNameResolverContext.java:636)
    	at io.netty.resolver.dns.DnsNameResolverContext$3.operationComplete(DnsNameResolverContext.java:342)
    	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
    	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
    	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
    	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
    	at io.netty.resolver.dns.DnsQueryContext.setSuccess(DnsQueryContext.java:197)
    	at io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:180)
    	at io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:969)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1412)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:943)
    	at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:93)
    	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
    	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    	at java.lang.Thread.run(Thread.java:748)
    

    Steps to reproduce

    1. Configure a top level domain someDomain on a DNS server you own
    2. Configure a host under the new top level domain someHost.someDomain
    3. Configure multiple resolvers on the DNS client machine that will run the Netty code, e.g. 8.8.8.8, 192.168.1.1, and 10.0.0.1 (I have 3 resolvers configured, each pointing to a different DNS master: global DNS, local personal private network, company private network over a VPN)
    4. Configure the search domain on the DNS client machine so that it does not match the top level domain, e.g. search.otherDomain
    5. Ask Netty to resolve someHost.someDomain
    6. Resolution fails.

    Minimal yet complete reproducer code (or URL to code)

    I'm not using Netty directly so I'm not sure what to put here. Do you want my Redisson code?
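
    For reference, a way to exercise the resolver's search-domain handling directly, without Redisson, might look like the sketch below; the builder usage is a best-effort sketch and the host and search-domain names are the placeholders from the steps above:

    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.nio.NioDatagramChannel;
    import io.netty.resolver.dns.DnsNameResolver;
    import io.netty.resolver.dns.DnsNameResolverBuilder;

    import java.net.InetAddress;
    import java.util.Collections;

    public final class SearchDomainRepro {
        public static void main(String[] args) throws Exception {
            NioEventLoopGroup group = new NioEventLoopGroup(1);
            try {
                DnsNameResolver resolver = new DnsNameResolverBuilder(group.next())
                        .channelType(NioDatagramChannel.class)
                        // step 4: a search domain that does not match the target's top level domain
                        .searchDomains(Collections.singletonList("search.otherDomain"))
                        .build();
                // step 5: resolve a host under the custom top level domain
                InetAddress address = resolver.resolve("someHost.someDomain").sync().getNow();
                System.out.println(address);
            } finally {
                group.shutdownGracefully();
            }
        }
    }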

    Netty version

    Breaks when I upgrade to Redisson 3.6+, which pulls in Netty 4.1.20+. When forcing a downgrade to Netty 4.1.13 the problem still shows, but with a slightly different stack trace.

    JVM version (e.g. java -version)

    java version "1.8.0_162" Java(TM) SE Runtime Environment (build 1.8.0_162-b12) Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)

    OS version (e.g. uname -a)

    Windows 10, Centos 7, Ubuntu 16.04

    defect 
    opened by johnjaylward 93
  • ALPN / NPN need to handle no compatible protocols found

    ALPN / NPN need to handle no compatible protocols found

    Motivation: If there are no common protocols in the ALPN protocol exchange we still complete the handshake successfully. According to http://tools.ietf.org/html/rfc7301#section-3.2, this handshake should fail with a status of no_application_protocol.

    Modifications:

    • The upstream project used for ALPN (alpn-boot) does not support this, so a PR https://github.com/jetty-project/jetty-alpn/pull/3 was submitted.
    • The Netty code using alpn-boot should support the new interface (return null on the existing method).
    • The version number of alpn-boot must be updated in the pom.xml files.

    Result: Netty fails the SSL handshake if ALPN is used and there are no common protocols.
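
    For reference, in today's Netty API an application can opt into failing the handshake when no protocol matches via ApplicationProtocolConfig's FATAL_ALERT behaviors (support depends on the SSL provider); a rough sketch, with placeholder certificate and key files:

    import java.io.File;
    import javax.net.ssl.SSLException;

    import io.netty.handler.ssl.ApplicationProtocolConfig;
    import io.netty.handler.ssl.ApplicationProtocolNames;
    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslContextBuilder;

    public final class StrictAlpnContext {
        // certChainFile and keyFile are placeholders for the server's PEM files.
        static SslContext build(File certChainFile, File keyFile) throws SSLException {
            return SslContextBuilder.forServer(certChainFile, keyFile)
                    .applicationProtocolConfig(new ApplicationProtocolConfig(
                            ApplicationProtocolConfig.Protocol.ALPN,
                            // fail the handshake instead of falling back when no protocol matches
                            ApplicationProtocolConfig.SelectorFailureBehavior.FATAL_ALERT,
                            ApplicationProtocolConfig.SelectedListenerFailureBehavior.FATAL_ALERT,
                            ApplicationProtocolNames.HTTP_2,
                            ApplicationProtocolNames.HTTP_1_1))
                    .build();
        }
    }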

    defect 
    opened by Scottmitch 85
  • EPollArrayWrapper.epollWait 100% CPU Usage

    EPollArrayWrapper.epollWait 100% CPU Usage

    Hi,

    I believe I have an issue similar to #302, but on Linux (Ubuntu 10.04) with JDK 1.6.0u30 and JDK 1.7.0u4, using Netty-4.0.0 (Revision: 52a7d28cb59e3806fda322aecf7a85a6adaeb305)

    The app is proxying connections to backend systems. The proxy has a pool of channels that it can use to send requests to the backend systems. If the pool is low on channels, new channels are spawned and put into the pool so that requests sent to the proxy can be serviced. The pools get populated on app startup, so that is why it doesn't take long at all for the CPU to spike through the roof (22 seconds into the app lifecycle).

    The test box has two CPUs; the output from 'top' is below:

    PID  USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    8220 root   20   0 2281m 741m  10m R 50.2 18.7  0:22.57 java                                                                             
    8218 root   20   0 2281m 741m  10m R 49.9 18.7  0:22.65 java                                                                             
    8219 root   20   0 2281m 741m  10m R 49.2 18.7  0:22.86 java                                                                             
    8221 root   20   0 2281m 741m  10m R 49.2 18.7  0:22.20 java 
    

    Thread dump for the four NioClient-based worker threads that are chewing up all the CPU:

    "backend-worker-pool-7-thread-1" prio=10 tid=0x00007f5918015800 nid=0x201a runnable [0x00007f5924ba3000]
       java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        - locked <0x000000008be93580> (a sun.nio.ch.Util$2)
        - locked <0x000000008be93570> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000008be92548> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at io.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:55)
        at io.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:261)
        at io.netty.channel.socket.nio.NioWorker.run(NioWorker.java:37)
        at io.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:43)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)   Locked ownable synchronizers:    - <0x000000008be00748> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
    
    "backend-worker-pool-7-thread-2" prio=10 tid=0x00007f5918012000 nid=0x201b runnable [0x00007f5924b82000]
       java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        - locked <0x000000008be94a28> (a sun.nio.ch.Util$2)
        - locked <0x000000008be94a18> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000008be90648> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at io.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:55)
        at io.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:261)
        at io.netty.channel.socket.nio.NioWorker.run(NioWorker.java:37)
        at io.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:43)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)   Locked ownable synchronizers:    - <0x000000008be904c8> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
    
    "backend-worker-pool-7-thread-3" prio=10 tid=0x00007f5918007800 nid=0x201c runnable [0x00007f5924b61000]
       java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        - locked <0x000000008be952e0> (a sun.nio.ch.Util$2)
        - locked <0x000000008be952d0> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000008be8f858> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at io.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:55)
        at io.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:261)
        at io.netty.channel.socket.nio.NioWorker.run(NioWorker.java:37)
        at io.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:43)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)   Locked ownable synchronizers:    - <0x000000008be8f618> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
    
    "backend-worker-pool-7-thread-4" prio=10 tid=0x00007f5918019000 nid=0x201d runnable [0x00007f5924b40000]
       java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
        - locked <0x000000008be003f8> (a sun.nio.ch.Util$2)
        - locked <0x000000008be003e8> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000008be00408> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
        at io.netty.channel.socket.nio.SelectorUtil.select(SelectorUtil.java:55)
        at io.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:261)
        at io.netty.channel.socket.nio.NioWorker.run(NioWorker.java:37)
        at io.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:43)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)   Locked ownable synchronizers:    - <0x000000008be004e0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
    
    defect 
    opened by blucas 76
  • (POC) Refactor FileRegion to implement new ReadableObject API

    (POC) Refactor FileRegion to implement new ReadableObject API

    Motivation:

    Based on #3965. As a first step of integrating the unified reading API, the low-hanging fruit is to refactor FileRegion to use it.

    Modifications:

    Refactored FileRegion to extend ReadableObject and implemented the new interface in DefaultFileRegion.

    Result:

    FileRegion implements the new ReadableObject interface.

    opened by nmittler 72
  • Draft - io_uring - GSoC 2020

    Draft - io_uring - GSoC 2020

    This draft can be built on Linux kernel 5.9-rc1 and Linux kernel 5.8.3. I came up with some ideas on how to implement it, but I'm not sure what the right way is, so feel free to comment.

    I created an event HashMap to keep track of what kind of events are coming off the completion queue; that means you have to save the eventId in the submission queue->user_data to identify events.

    Write

    • io_uring events are not in order by default, but write events in Netty should be in order; there is a flag for that, IOSQE_IO_LINK (related to one channel)
    • The doWrite(ChannelOutboundBuffer in) method writes until readableBytes is 0, which means you have to store the ByteBuf somewhere

    Accept

    • You need the address of the peer socket to create a new child Channel. One solution would be to save the file descriptor in the event ServerChannel, because the acceptedAddress argument is saved in AbstractIOUringServerChannel to call serverChannel.createNewChildChannel
    • My idea is that whenever an accept event is executed, a new accept event is started in the EventLoop

    Read

    • I'm wondering how to get the pipeline instance to call fireChannelRead(ByteBuf) in the EventLoop. Do we have to save the ByteBuf (as mentioned above), or is it possible to get the same ByteBuf reference from the ByteAllocatorHandle?
    • As discussed above, save the file descriptor in the event and then invoke pipeline.fireChannelRead. WDYT?
    • How often is Channel.read or doBeginRead called?

    What's the difference between ByteBufUtil.threadLocalDirectBuffer and isDirectBufferPooled?

    What about naming: IOUringSocketChannel, IO_UringSocketChannel, or UringSocketChannel? WDYT?

    #10142

    opened by 1Jo1 70
  • Direct memory exhausted after cant recycle any FastThreadLocalThreads

    Direct memory exhausted after cant recycle any FastThreadLocalThreads

    Version: netty-all-4.0.25.Final. Using: PooledByteBufAllocator. I have encountered a problem where direct memory keeps rising in 16 MB increments when the load average goes up, until the 2 GB direct memory limit is exhausted and the server crashes. I found masses of FastThreadLocalThreads and io.netty.util.Recycler$DefaultHandle instances. My code is configured as follows:

    (screenshots: allocator configuration code and memory usage charts)

    What's wrong with my use of Netty? Can anyone help me?

    needs info 
    opened by jiangguilong2000 70
  • Optimize PoolChunk to not use DFS

    Optimize PoolChunk to not use DFS

    I received this from @pavanka, who did some optimizations for the algorithm used in PoolChunk to allow faster allocations of buffers > pageSize. The improvement is only notable if no cache is used (they cannot use the cache because they allocate from a lot of different threads).

    Here are the details of the algorithm (written down by @pavanka):

    Algorithm for poolchunk allocation
    
    allocateRun
    We store the complete balanced binary tree in an array (just like heaps)
    
    this is how the tree looks like, with the size of each node being mentioned
    depth=0  chunkSize (1 node)
    depth=1  chunkSize/2 (2 nodes)
    depth=h  chunkSize/2^h (2^h nodes)
    
    if we want to allocate a bytebuf of size chunkSize/2^k all we have to do is find the first
    node (from left) at height k that is unused
    
    This is how we do it:
    1) When we construct the memoryMap array in the beginning we store the depth of a node in each node
    memoryMap[id] = x
    => in the subtree rooted at id, the smallest depth which has a unused node
    
    In the beginning, x = depth for that id because we can potentially allocate all of it
    As we allocate & free nodes, we update these numbers so that the property is maintained
    
    if memoryMap[id] = depth => it is unallocated
    memoryMap[id] > depth => it is a branch page [at least one of its child nodes is allocated]
    memoryMap[id] = maxOrder + 1 => the node is fully allocated & none of its children can be allocated
    
    allocateRun(h) => we want to find the first node (from left) at height h that can be allocated
    Algorithm:
    1) start at root
    if value > h => cannot be allocated from this chunk
    2) if left node value <= h; we can allocate from left subtree so move to left and repeat until found
        else try in right subtree
    
    
    For allocateSubpage
    All subpages allocated are stored in a map at key = elemSize
    Algorithm:
    1) if subpage at elemSize != null: try allocating from it.
    If it fails: allocateSubpageSimple
    2) else : just allocateSubpageSimple
    
    allocateSubpageSimple
    1) use allocateRun(maxOrder) to find an empty (i.e., unused) leaf (i.e., page)
    use this handle to construct the poolsubpage object or if it already exists just initialize it
    with required normCapacity
    

    And here is the microbenchmark:

    master:

    # Run complete. Total time: 00:29:58
    
    Benchmark                                                           (size)   Mode   Samples        Score  Score error    Units
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    00000  thrpt        20     9824.753      440.050   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    00256  thrpt        20    10917.065      129.038   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    01024  thrpt        20    10549.387       93.710   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    04096  thrpt        20    10336.658       54.765   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    16384  thrpt        20    11049.749      320.725   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    65536  thrpt        20    10475.519      351.349   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      00000  thrpt        20    10432.484      399.464   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      00256  thrpt        20    10229.098       76.816   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      01024  thrpt        20    10429.187      268.470   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      04096  thrpt        20     9840.171      272.099   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      16384  thrpt        20    10951.141      277.530   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      65536  thrpt        20    11091.859       58.675   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           00000  thrpt        20     6857.392      397.553   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           00256  thrpt        20     6326.836       79.369   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           01024  thrpt        20     5793.289       98.991   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           04096  thrpt        20     5980.313      129.175   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           16384  thrpt        20      455.153       11.252   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           65536  thrpt        20      962.117        4.532   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             00000  thrpt        20     7835.567       99.523   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             00256  thrpt        20     6678.297       31.975   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             01024  thrpt        20     6208.095       89.338   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             04096  thrpt        20     6172.657      123.379   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             16384  thrpt        20      459.972        4.810   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             65536  thrpt        20      963.570        6.324   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         00000  thrpt        20     1159.940       23.065   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         00256  thrpt        20     1001.759       18.950   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         01024  thrpt        20      833.887       15.259   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         04096  thrpt        20      436.775       18.341   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         16384  thrpt        20      172.028        2.330   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         65536  thrpt        20       55.251        0.425   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           00000  thrpt        20    12269.097      107.365   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           00256  thrpt        20     6861.052      312.324   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           01024  thrpt        20     2830.463       16.016   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           04096  thrpt        20      788.576        3.252   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           16384  thrpt        20      139.584        2.537   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           65536  thrpt        20       19.342        0.076   ops/ms
    [GC 132638K->3521K(492544K), 0.0078330 secs]
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,799.177 sec - in io.netty.microbench.buffer.ByteBufAllocatorBenchmark
    
    Results :
    
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
    

    With this PR:

    # Run complete. Total time: 00:29:56
    
    Benchmark                                                           (size)   Mode   Samples        Score  Score error    Units
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    00000  thrpt        20    10558.393      128.586   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    00256  thrpt        20    10651.489      240.927   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    01024  thrpt        20    10786.637       73.084   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    04096  thrpt        20    10037.158      273.448   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    16384  thrpt        20     9986.690      387.501   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledDirectAllocAndFree    65536  thrpt        20    10067.789      303.871   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      00000  thrpt        20    10710.797      164.222   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      00256  thrpt        20    10376.529      376.920   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      01024  thrpt        20    10676.458      105.060   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      04096  thrpt        20     9541.564      267.631   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      16384  thrpt        20     9169.813      512.857   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.defaultPooledHeapAllocAndFree      65536  thrpt        20    10046.750      391.020   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           00000  thrpt        20     7537.410       91.468   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           00256  thrpt        20     6526.318      111.116   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           01024  thrpt        20     6171.069       59.633   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           04096  thrpt        20     6061.991      161.058   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           16384  thrpt        20     2930.053       39.659   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledDirectAllocAndFree           65536  thrpt        20     2996.079       31.436   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             00000  thrpt        20     8094.280       37.894   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             00256  thrpt        20     6538.747      142.815   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             01024  thrpt        20     6352.766       67.644   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             04096  thrpt        20     6231.485       43.748   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             16384  thrpt        20     2878.940       25.432   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.pooledHeapAllocAndFree             65536  thrpt        20     3001.949       18.698   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         00000  thrpt        20     1133.578       30.007   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         00256  thrpt        20     1050.525       30.449   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         01024  thrpt        20      837.913       11.251   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         04096  thrpt        20      441.674        8.114   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         16384  thrpt        20      172.224        2.197   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledDirectAllocAndFree         65536  thrpt        20       55.325        0.442   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           00000  thrpt        20    11218.176      203.149   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           00256  thrpt        20     6561.209      325.634   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           01024  thrpt        20     2495.711       30.599   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           04096  thrpt        20      725.909        8.473   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           16384  thrpt        20      129.173        2.655   ops/ms
    i.n.m.b.ByteBufAllocatorBenchmark.unpooledHeapAllocAndFree           65536  thrpt        20       18.793        0.484   ops/ms
    [GC 132622K->3473K(492544K), 0.0070480 secs]
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,796.792 sec - in io.netty.microbench.buffer.ByteBufAllocatorBenchmark
    

    As you can see, allocations > pageSize are a lot faster.

    improvement 
    opened by normanmaurer 70
  • Improve predictability of writeUtf8/writeAscii performance

    Improve predictability of writeUtf8/writeAscii performance

    Motivation:

    writeUtf8 can suffer from inlining issues and/or megamorphic call-sites on the hot path due to the ByteBuf hierarchy.

    Modifications:

    Duplicate and specialize the code paths to reduce the need for polymorphic calls.

    Result:

    Performance is more stable in user code.
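
    For context, the call sites this change targets look roughly like the following in user code; the allocator choice here is just illustrative:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufUtil;
    import io.netty.buffer.PooledByteBufAllocator;

    public final class WriteUtf8Example {
        public static void main(String[] args) {
            ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer();
            try {
                // The hot path being specialized: encoding a CharSequence straight into a ByteBuf.
                ByteBufUtil.writeUtf8(buf, "héllo netty");
                ByteBufUtil.writeAscii(buf, "hello netty");
            } finally {
                buf.release();
            }
        }
    }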

    improvement discussion 
    opened by franz1981 69
  • SSL Timeout exception when upgrading netty from 4.1.74.Final to 4.1.75.Final

    SSL Timeout exception when upgrading netty from 4.1.74.Final to 4.1.75.Final

    I am not sure if this is an issue or not (I haven't figured it out yet), so I apologise in advance if it is not :).

    Using finagle 22.12.0, I am trying to create an HTTP server as follows:

    val sslConfiguration = SslServerConfiguration(
      keyCredentials = serverKeyCredentials,
      clientAuth = ClientAuth.Wanted,
      trustCredentials = TrustCredentials.Insecure
    )
    
    val coreServer = Http.server
      .withTransport
      .tls(sslConfiguration)
      ...
    

    With netty 4.1.74.Final I have the following log messages:

    {"timestamp":"2023-01-04T10:26:34.029Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.OpenSsl","rawMessage":"Failed to initialize netty-tcnative; OpenSslEngine will be unavailable. See https://netty.io/wiki/forked-tomcat-native.html for more information."} {"timestamp":"2023-01-04T10:04:45.983Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.OpenSsl","rawMessage":"netty-tcnative using native library: BoringSSL"} {"timestamp":"2023-01-04T10:26:34.17Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.JdkSslContext","rawMessage":"Default protocols (JDK): [TLSv1.2, TLSv1.1, TLSv1] "} {"timestamp":"2023-01-04T10:04:46.089Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.OpenSsl","rawMessage":"KeyManagerFactory not supported."} {"timestamp":"2023-01-04T10:26:34.17Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.JdkSslContext","rawMessage":"Default cipher suites (JDK): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA]"}

    With netty 4.1.75.Final I have the following log messages:

    {"timestamp":"2023-01-04T10:04:45.983Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.util.internal.NativeLibraryLoader","rawMessage":"Loaded library with name 'netty_tcnative_osx_x86_64'"} {"timestamp":"2023-01-04T10:26:34.028Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.OpenSsl","rawMessage":"Initialize netty-tcnative using engine: 'default'"} {"timestamp":"2023-01-04T10:04:45.983Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.OpenSsl","rawMessage":"Initialize netty-tcnative using engine: 'default'"} {"timestamp":"2023-01-04T10:26:34.029Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.OpenSsl","rawMessage":"Failed to initialize netty-tcnative; OpenSslEngine will be unavailable. See https://netty.io/wiki/forked-tomcat-native.html for more information."} {"timestamp":"2023-01-04T10:04:45.983Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.OpenSsl","rawMessage":"netty-tcnative using native library: BoringSSL"} {"timestamp":"2023-01-04T10:26:34.17Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.JdkSslContext","rawMessage":"Default protocols (JDK): [TLSv1.2, TLSv1.1, TLSv1] "} {"timestamp":"2023-01-04T10:04:46.089Z","thread":"finagle/netty4-2-1","level":"DEBUG","logger":"io.netty.handler.ssl.OpenSsl","rawMessage":"KeyManagerFactory not supported."}

    I have the following Java version: openjdk version "1.8.0_345". OS version: Debian 12.

    If I explicitly disable OpenSsl using "-Dio.netty.handler.ssl.noOpenSsl=true", everything works with netty 4.1.75+ as well. Can I carry on with disabling OpenSSL, or is this not recommended?
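
    As a side note, Netty's OpenSsl helper can report why netty-tcnative failed to initialize, which may help decide whether disabling it is safe; a minimal check:

    import io.netty.handler.ssl.OpenSsl;

    public final class OpenSslCheck {
        public static void main(String[] args) {
            // Reports whether the native SSL provider is usable and, if not, why it failed to load.
            System.out.println("OpenSsl available: " + OpenSsl.isAvailable());
            if (!OpenSsl.isAvailable()) {
                OpenSsl.unavailabilityCause().printStackTrace();
            }
        }
    }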

    Any idea what changed between 4.1.74 and 4.1.75+ that could have caused this? As of 4.1.75+, without noOpenSsl=true, SSL connections to an HTTP server created as described above return a timeout exception.

    Thanks.

    opened by alin-t 0
  • Bump up os-maven-plugin to 1.7.1

    Bump up os-maven-plugin to 1.7.1

    Motivation:

    os-maven-plugin 1.7.0 does not recognize LoongArch information; os-maven-plugin has supported LoongArch since 1.7.1.

    Modification:

    Update pom.xml and testsuite-shading/pom.xml

    Result:

    LoongArch CPU information is correctly identified.

    opened by Panxuefeng-loongson 1
  • Fix NPE caused by old bundle plugin version

    Fix NPE caused by old bundle plugin version

    Motivation:

    We used a very old bundle plugin version in our common module. This caused an NPE when using a more recent JDK.

    Modifications:

    • Update the plugin version in general
    • Remove extra version declaration in common pom.xml

    Result:

    No more NPE during build

    opened by normanmaurer 0
  • Remove duplicated declaration of log4j2 core

    Remove duplicated declaration of log4j2 core

    Motivation:

    There were some warnings related to duplicated declarations of log4j2 core.

    Modifications:

    Remove duplicated declaration

    Result:

    No more warnings during the build related to the duplicated declaration.

    opened by normanmaurer 0
  • FlowControlHandler is passing read events when auto-reading is turned off

    FlowControlHandler is passing read events when auto-reading is turned off

    Expected behavior

    Read events should not be passed down the pipeline (that is, ctx.read(), which results in reading from the client socket, should not be called) when the queue is empty and auto-reading is turned off for the channel. I would suggest that the expected behavior look something like the following piece of code:

    if (dequeue(ctx, 1) == 0) {
        // The queue was empty, so nothing could be passed on: remember that the next
        // inbound message should be consumed immediately and ask for more data.
        shouldConsume = true;
        ctx.read();
    } else if (ctx.channel().config().isAutoRead()) {
        // A queued message was delivered: only keep reading when auto-read is on.
        ctx.read();
    }
    

    Actual behavior

    Read events are passed down the pipeline in every case, regardless of the channel's auto-read configuration.

    Steps to reproduce

    1. Let's suppose we have a channel with a FlowControlHandler in its pipeline.
    2. Turn auto-reading off for the channel.
    3. Manually call ctx.read()
    4. For every ctx.read() call, the FlowControlHandler's read method will pass the read event down the pipeline, which results in reading from the client socket.
    5. If the messages we read are not processed fast enough by our custom logic, a significant amount of data can pile up in memory, leading to out-of-memory errors.

    Minimal yet complete reproducer code (or URL to code)

    n/a
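    No reproducer was attached, but a rough, untested sketch of the setup described in the steps above might look like the following; the EmbeddedChannel and the outbound "read probe" handler are assumptions made purely for illustration, not part of the original report:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelOutboundHandlerAdapter;
    import io.netty.channel.embedded.EmbeddedChannel;
    import io.netty.handler.flow.FlowControlHandler;

    public class FlowControlReadRepro {
        public static void main(String[] args) {
            // Outbound "read probe" placed below the FlowControlHandler (closer to the
            // transport) so we can see which read() calls are actually forwarded.
            ChannelOutboundHandlerAdapter readProbe = new ChannelOutboundHandlerAdapter() {
                @Override
                public void read(ChannelHandlerContext ctx) {
                    System.out.println("read() was forwarded towards the transport");
                    ctx.read();
                }
            };

            // Handlers are added head-first, so a read() issued on the channel flows
            // tail -> FlowControlHandler -> readProbe -> head.
            EmbeddedChannel ch = new EmbeddedChannel(readProbe, new FlowControlHandler());

            // Step 2: turn auto-reading off (note: one auto-read may already have fired
            // during registration, before this call takes effect).
            ch.config().setAutoRead(false);

            // Step 3: manually call read() while the handler's queue is empty; per the
            // report, every call is forwarded even though auto-read is off.
            ch.read();
            ch.read();

            ch.finishAndReleaseAll();
        }
    }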

    Netty version

    4.1.85.Final and above

    JVM version (e.g. java -version)

    any

    OS version (e.g. uname -a)

    any

    opened by ivanangelov 1
  • Failed to execute goal org.fusesource.hawtjni:maven-hawtjni-plugin:1.14:build (build-native-lib) on project netty-resolver-dns-native-macos: build failed: org.apache.maven.plugin.MojoExecutionException: ./configure failed with exit code: 1

    Failed to execute goal org.fusesource.hawtjni:maven-hawtjni-plugin:1.14:build (build-native-lib) on project netty-resolver-dns-native-macos: build failed: org.apache.maven.plugin.MojoExecutionException: ./configure failed with exit code: 1

    Expected behavior

    compile success

    Actual behavior

    Failed to execute goal org.fusesource.hawtjni:maven-hawtjni-plugin:1.14:build (build-native-lib) on project netty-resolver-dns-native-macos: build failed: org.apache.maven.plugin.MojoExecutionException: ./configure failed with exit code: 1

    (skip)
    checking whether the gcc linker (/Library/Developer/CommandLineTools/usr/bin/ld) supports shared libraries... yes
    [INFO] checking dynamic linker characteristics... darwin20.6.0 dyld
    [INFO] checking how to hardcode library paths into programs... immediate
    [INFO] checking whether stripping libraries is possible... yes
    [INFO] checking if libtool supports shared libraries... yes
    [INFO] checking whether to build shared libraries... yes
    [INFO] checking whether to build static libraries... no
    [INFO] configure: javac was on your path, checking to see if it's part of a JDK we can use...
    [INFO] checking if '/usr' is a JDK... no
    [INFO] configure: Taking a guess as to where your OS installs the JDK by default...
    [INFO] checking if '/System/Library/Frameworks/JavaVM.framework' is a JDK... no
    [INFO] configure: error: JDK not found. Please use the --with-jni-jdk option
    [INFO] rc: 1

    (skip...)

    Steps to reproduce

    1. Run mvn clean -U install -DskipTests at the netty parent path; the build fails with the compile error above.

    2. cd resolver-dns-native-macos and run mvn clean -U install -DskipTests -X; the build fails with the same compile error.

    Netty version

    4.1

    JVM version (e.g. java -version)

    java version "1.8.0_281" Java(TM) SE Runtime Environment (build 1.8.0_281-b09) Java HotSpot(TM) 64-Bit Server VM (build 25.281-b09, mixed mode)

    OS version (e.g. uname -a)

    Darwin Kernel Version 20.6.0: root:xnu-7195.141.2~5/RELEASE_X86_64 x86_64

    I'm trying...

    I have configured JAVA_HOME, and echo $JAVA_HOME prints the Java root path, but the -X log shows:

    (skip......)
    checking if '/usr' is a JDK... no
    [INFO] configure: Taking a guess as to where your OS installs the JDK by default...
    [INFO] checking if '/System/Library/Frameworks/JavaVM.framework' is a JDK... no
    [INFO] configure: error: JDK not found. Please use the --with-jni-jdk option
    (skip......)

    The build does not pick up the JAVA_HOME from my PATH. How can I resolve this compile error?

    opened by superRainGit 4
Owner
The Netty Project
Opening the future of network programming since 2001
Magician is an asynchronous non-blocking network protocol analysis package; it supports the TCP and UDP protocols and has built-in HTTP and WebSocket decoders

An asynchronous non-blocking network protocol analysis package Project Description Magician is an asynchronous non-blocking network protocol analysis

贝克街的天才 103 Nov 30, 2022
Simple & Lightweight Netty packet library + event system

Minimalistic Netty-Packet library Create packets with ease Bind events to packets Example Packet: public class TestPacket extends Packet { privat

Pierre Maurice Schwang 17 Dec 7, 2022
Experimental Netty-based Java 16 application/web framework

Experimental Netty-based application/web framework. An example application can be seen here. Should I use this? Probably not! It's still incredibly ea

amy null 8 Feb 17, 2022
Microhttp - a fast, scalable, event-driven, self-contained Java web server

Microhttp is a fast, scalable, event-driven, self-contained Java web server that is small enough for a programmer to understand and reason about.

Elliot Barlas 450 Dec 23, 2022
Apache MINA is a network application framework which helps users

Apache MINA is a network application framework which helps users develop high performance and high scalability network applications easily

The Apache Software Foundation 846 Dec 20, 2022
LINE 4.1k Dec 31, 2022
IoT Platform, Device management, data collection, processing and visualization, multi protocol, rule engine, netty mqtt client

GIoT GIoT: GIoT is an open-source IoT platform supporting device management, thing models, product and device management, rule engines, multiple storage backends, multiple sinks, multiple protocols (http, mqtt, tcp, and custom protocols), multi-tenant management and more, with plugin-based development. Documentation Quick Start Module -> giot-starte

gerry 34 Sep 13, 2022
Nifty is an implementation of Thrift clients and servers on Netty

This project is archived and no longer maintained. At the time of archiving, open issues and pull requests were clo

Meta Archive 902 Sep 9, 2022
Asynchronous Http and WebSocket Client library for Java

Async Http Client Follow @AsyncHttpClient on Twitter. The AsyncHttpClient (AHC) library allows Java applications to easily execute HTTP requests and a

AsyncHttpClient 6k Dec 31, 2022
Mats3: Message-based Asynchronous Transactional Staged Stateless Services

Mats3: Message-based Asynchronous Transactional Staged Stateless Services

null 17 Dec 28, 2022
A High Performance Network ( TCP/IP ) Library

Chronicle-Network About A High Performance Network library Purpose This library is designed to be lower latency and support higher throughputs by empl

Chronicle Software : Open Source 231 Dec 31, 2022
Simulating shitty network connections so you can build better systems.

Comcast Testing distributed systems under hard failures like network partitions and instance termination is critical, but it's also important we test

Tyler Treat 9.8k Dec 30, 2022
Lunar Network SoupPvP gamemode replica

SoupPvP Lunar Network SoupPvP gamemode replica Disclaimer This is a work-in-progress, for that reason, a lot of features and essential parts of Lunar'

Elb1to 64 Nov 30, 2022
JNetcat : a tool to debug network issues or simulate servers

JNetcat A tool to easily debug or monitor traffic on TCP/UDP and simulate a server or client No need of telnet anymore to test for a remote connection

io-panic 3 Jul 26, 2022
A network core plugin for Spigot providing the best experience for Minecraft servers.

tCore The core plugin for Spigot. (Supports 1.8.8<=) A plugin that serves as the core of large-scale servers, networks, and the like. Operation on protocol versions below 1.8 has not been verified. It is quite a large amount of source code, but a wide range of features are implemented. The contents themselves are

null 6 Oct 13, 2022
Intra is an experimental tool that allows you to test new DNS-over-HTTPS services that encrypt domain name lookups and prevent manipulation by your network

Intra Intra is an experimental tool that allows you to test new DNS-over-HTTPS services that encrypt domain name lookups and prevent manipulation by y

Jigsaw 1.2k Jan 1, 2023
VelocityControl is a BungeeControl-fork plugin enabling ChatControl Red to connect with your Velocity network.

VelocityControl is a BungeeControl-fork plugin enabling ChatControl Red to connect with your Velocity network.

Matej Pacan 10 Oct 24, 2022
A Java event based WebSocket and HTTP server

Webbit - A Java event based WebSocket and HTTP server Getting it Prebuilt JARs are available from the central Maven repository or the Sonatype Maven r

null 808 Dec 23, 2022
A networking framework that evolves with your application

ServiceTalk ServiceTalk is a JVM network application framework with APIs tailored to specific protocols (e.g. HTTP/1.x, HTTP/2.x, etc…) and supports m

Apple 805 Dec 30, 2022