Asynchronous Http and WebSocket Client library for Java

Overview


Follow @AsyncHttpClient on Twitter.

The AsyncHttpClient (AHC) library allows Java applications to easily execute HTTP requests and asynchronously process HTTP responses. The library also supports the WebSocket Protocol.

It's built on top of Netty. It's currently compiled on Java 8 but runs on Java 9 too.

New Roadmap RFCs!

Well, not really RFCs, but as I'm ramping up to release a new version, I would appreciate comments from the community. Please open an issue, label it RFC, and I'll take a look!

This Repository is Actively Maintained

@TomGranot is the current maintainer of this repository. You should feel free to reach out to him in an issue here or on Twitter for anything regarding this repository.

Installation

Binaries are deployed on Maven Central.

Import the AsyncHttpClient Bill of Materials (BOM) to add dependency management for AsyncHttpClient artifacts to your project:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.asynchttpclient</groupId>
            <artifactId>async-http-client-bom</artifactId>
            <version>LATEST_VERSION</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Add a dependency on the main AsyncHttpClient artifact:

<dependencies>
    <dependency>
        <groupId>org.asynchttpclient</groupId>
        <artifactId>async-http-client</artifactId>
    </dependency>
</dependencies>

The async-http-client-extras-* and other modules can also be added without having to specify the version for each dependency, because they are all managed via the BOM.

Version

AHC doesn't use SEMVER, and won't.

  • MAJOR = huge refactoring
  • MINOR = new features and minor API changes, upgrading should require 1 hour of work to adapt sources
  • FIX = no API change, just bug fixes, only those are source and binary compatible with same minor version

Check CHANGES.md for migration path between versions.

Basics

Feel free to check the Javadoc or the code for more information.

Dsl

Import the Dsl helpers to use convenient methods to bootstrap components:

import static org.asynchttpclient.Dsl.*;

Client

import static org.asynchttpclient.Dsl.*;

AsyncHttpClient asyncHttpClient = asyncHttpClient();

AsyncHttpClient instances must be closed (call the close method) once you're done with them, typically when shutting down your application. If you don't, you'll experience threads hanging and resource leaks.

AsyncHttpClient instances are intended to be global resources that share the same lifecycle as the application. AHC will usually underperform if you create a new client for each request, as each instance creates its own threads and connection pools. It's possible to create shared resources (EventLoop and Timer) beforehand and pass them to multiple client instances in the config. You'll then be responsible for closing those shared resources.
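
Since AsyncHttpClient implements java.io.Closeable, try-with-resources is a convenient way to guarantee cleanup in short-lived programs. A minimal sketch (the URL is a placeholder):

```java
import java.util.concurrent.Future;
import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Response;
import static org.asynchttpclient.Dsl.*;

// The client (including its threads and connection pool) is closed
// automatically when the block exits.
try (AsyncHttpClient client = asyncHttpClient()) {
    Future<Response> whenResponse = client.prepareGet("http://www.example.com/").execute();
    System.out.println(whenResponse.get().getStatusCode());
}
```

For long-lived applications, prefer a single client instance closed at shutdown instead.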

Configuration

Finally, you can also configure the AsyncHttpClient instance via its AsyncHttpClientConfig object:

import static org.asynchttpclient.Dsl.*;

AsyncHttpClient c = asyncHttpClient(config().setProxyServer(proxyServer("127.0.0.1", 38080)));
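
Other settings follow the same builder pattern. A sketch of a few commonly tuned options (the values are illustrative, not recommendations):

```java
import org.asynchttpclient.AsyncHttpClient;
import static org.asynchttpclient.Dsl.*;

// Illustrative values only; tune for your workload.
AsyncHttpClient client = asyncHttpClient(config()
        .setConnectTimeout(5_000)       // ms to establish the connection
        .setRequestTimeout(60_000)      // ms for the whole request/response
        .setMaxConnectionsPerHost(100)  // pool size per remote host
        .setFollowRedirect(true));
```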

HTTP

Sending Requests

Basics

AHC provides two APIs for defining requests: bound and unbound. AsyncHttpClient and Dsl provide methods for standard HTTP methods (GET, POST, PUT, etc.), but you can also pass a custom one.

import org.asynchttpclient.*;

// bound
Future<Response> whenResponse = asyncHttpClient.prepareGet("http://www.example.com/").execute();

// unbound
Request request = get("http://www.example.com/").build();
Future<Response> whenResponse = asyncHttpClient.executeRequest(request);

Setting Request Body

Use the setBody method to add a body to the request.

This body can be of type:

  • java.io.File
  • byte[]
  • List<byte[]>
  • String
  • java.nio.ByteBuffer
  • java.io.InputStream
  • Publisher<io.netty.buffer.ByteBuf>
  • org.asynchttpclient.request.body.generator.BodyGenerator

BodyGenerator is a generic abstraction that lets you create request bodies on the fly. Have a look at FeedableBodyGenerator if you're looking for a way to pass request chunks on the fly.
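
For example, a String body on an unbound POST request (a minimal sketch; the URL and payload are placeholders):

```java
import org.asynchttpclient.Request;
import static org.asynchttpclient.Dsl.*;

// Build a POST request carrying a String body.
Request postRequest = post("http://www.example.com/upload")
        .setHeader("Content-Type", "application/json")
        .setBody("{\"name\":\"value\"}")
        .build();
```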

Multipart

Use the addBodyPart method to add a multipart part to the request.

This part can be of type:

  • ByteArrayPart
  • FilePart
  • InputStreamPart
  • StringPart
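
For instance, combining a text field and a file upload in one request (a sketch; the URL and file path are placeholders):

```java
import java.io.File;
import org.asynchttpclient.Request;
import org.asynchttpclient.request.body.multipart.FilePart;
import org.asynchttpclient.request.body.multipart.StringPart;
import static org.asynchttpclient.Dsl.*;

// A multipart POST with one text part and one file part.
Request upload = post("http://www.example.com/upload")
        .addBodyPart(new StringPart("comment", "a text field"))
        .addBodyPart(new FilePart("file", new File("/tmp/report.pdf")))
        .build();
```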

Dealing with Responses

Blocking on the Future

execute methods return a java.util.concurrent.Future. You can simply block the calling thread to get the response.

Future<Response> whenResponse = asyncHttpClient.prepareGet("http://www.example.com/").execute();
Response response = whenResponse.get();

This is useful for debugging, but you'll most likely hurt performance or create bugs when running such code in production. The point of using a non-blocking client is to NOT BLOCK the calling thread!

Setting callbacks on the ListenableFuture

execute methods actually return a org.asynchttpclient.ListenableFuture similar to Guava's. You can configure listeners to be notified of the Future's completion.

ListenableFuture<Response> whenResponse = ???;
Runnable callback = () -> {
	try {
		Response response = whenResponse.get();
		System.out.println(response);
	} catch (InterruptedException | ExecutionException e) {
		e.printStackTrace();
	}
};
java.util.concurrent.Executor executor = ???;
whenResponse.addListener(callback, executor);

If the executor parameter is null, the callback will be executed in the IO thread. You MUST NEVER PERFORM BLOCKING operations in there, such as sending another request and blocking on its future.

Using custom AsyncHandlers

execute methods can take an org.asynchttpclient.AsyncHandler to be notified of the different events, such as receiving the status, the headers and body chunks. When you don't specify one, AHC will use an org.asynchttpclient.AsyncCompletionHandler.

AsyncHandler methods let you abort processing early (return AsyncHandler.State.ABORT) and return a computation result from onCompleted that will be used as the Future's result. See the AsyncCompletionHandler implementation as an example.

The sample below just captures the response status and skips processing of the response body chunks.

Note that returning ABORT closes the underlying connection.

import static org.asynchttpclient.Dsl.*;
import org.asynchttpclient.*;
import io.netty.handler.codec.http.HttpHeaders;

Future<Integer> whenStatusCode = asyncHttpClient.prepareGet("http://www.example.com/")
.execute(new AsyncHandler<Integer>() {
	private Integer status;
	@Override
	public State onStatusReceived(HttpResponseStatus responseStatus) throws Exception {
		status = responseStatus.getStatusCode();
		return State.ABORT;
	}
	@Override
	public State onHeadersReceived(HttpHeaders headers) throws Exception {
		return State.ABORT;
	}
	@Override
	public State onBodyPartReceived(HttpResponseBodyPart bodyPart) throws Exception {
		return State.ABORT;
	}
	@Override
	public Integer onCompleted() throws Exception {
		return status;
	}
	@Override
	public void onThrowable(Throwable t) {
	}
});

Integer statusCode = whenStatusCode.get();

Using Continuations

ListenableFuture has a toCompletableFuture method that returns a CompletableFuture. Beware that canceling this CompletableFuture won't properly cancel the ongoing request. There's a very good chance we'll return a CompletionStage instead in the next release.

CompletableFuture<Response> whenResponse = asyncHttpClient
            .prepareGet("http://www.example.com/")
            .execute()
            .toCompletableFuture()
            .exceptionally(t -> { /* Something wrong happened... */ return null; })
            .thenApply(response -> { /* Do something with the Response */ return response; });
whenResponse.join(); // wait for completion

You can get the complete Maven project for this simple demo from org.asynchttpclient.example.

WebSocket

Async Http Client also supports WebSocket. You need to pass a WebSocketUpgradeHandler where you would register a WebSocketListener.

WebSocket websocket = c.prepareGet("ws://demos.kaazing.com/echo")
        .execute(new WebSocketUpgradeHandler.Builder().addWebSocketListener(
                new WebSocketListener() {

                    @Override
                    public void onOpen(WebSocket websocket) {
                        websocket.sendTextFrame("...").sendTextFrame("...");
                    }

                    @Override
                    public void onClose(WebSocket websocket) {
                    }

                    @Override
                    public void onTextFrame(String payload, boolean finalFragment, int rsv) {
                        System.out.println(payload);
                    }

                    @Override
                    public void onError(Throwable t) {
                    }
                }).build()).get();

Reactive Streams

AsyncHttpClient has built-in support for reactive streams.

You can pass a request body as a Publisher<ByteBuf> or a ReactiveStreamsBodyGenerator.

You can also pass a StreamedAsyncHandler<T> whose onStream method will be notified with a Publisher<HttpResponseBodyPart>.

See tests in package org.asynchttpclient.reactivestreams for examples.
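
As an illustration (a sketch, not taken from the library's own docs), a StreamedAsyncHandler can forward body parts to a Reactive Streams Subscriber as they arrive instead of buffering them:

```java
import org.asynchttpclient.*;
import org.asynchttpclient.handler.StreamedAsyncHandler;
import org.reactivestreams.Publisher;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;
import io.netty.handler.codec.http.HttpHeaders;

// Stream body parts through a Subscriber; onBodyPartReceived is not
// called once onStream has taken over.
StreamedAsyncHandler<Void> handler = new StreamedAsyncHandler<Void>() {
    @Override
    public State onStream(Publisher<HttpResponseBodyPart> publisher) {
        publisher.subscribe(new Subscriber<HttpResponseBodyPart>() {
            @Override public void onSubscribe(Subscription s) { s.request(Long.MAX_VALUE); }
            @Override public void onNext(HttpResponseBodyPart part) { System.out.println(part.length()); }
            @Override public void onError(Throwable t) { t.printStackTrace(); }
            @Override public void onComplete() { }
        });
        return State.CONTINUE;
    }
    @Override public State onStatusReceived(HttpResponseStatus status) { return State.CONTINUE; }
    @Override public State onHeadersReceived(HttpHeaders headers) { return State.CONTINUE; }
    @Override public State onBodyPartReceived(HttpResponseBodyPart bodyPart) { return State.CONTINUE; }
    @Override public void onThrowable(Throwable t) { t.printStackTrace(); }
    @Override public Void onCompleted() { return null; }
};
```

You'd then pass the handler to executeRequest alongside the request, as with any other AsyncHandler.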

WebDAV

AsyncHttpClient has built-in support for the WebDAV protocol. The API can be used the same way normal HTTP requests are made:

Request mkcolRequest = new RequestBuilder("MKCOL").setUrl("http://host:port/folder1").build();
Response response = c.executeRequest(mkcolRequest).get();

or

Request propFindRequest = new RequestBuilder("PROPFIND").setUrl("http://host:port").build();
Response response = c.executeRequest(propFindRequest, new AsyncHandler<Response>() {
  // ...
}).get();

More

You can find more information on Jean-François Arcand's blog. Jean-François is the original author of this library. The content is sometimes out of date, but it gives a pretty good idea of the advanced features.

User Group

Keep up to date on the library development by joining the Asynchronous HTTP Client discussion group.

Google Group

Contributing

Of course, Pull Requests are welcome.

Here are the few rules we'd like you to respect if you do so:

  • Only edit the code related to the suggested change, so DON'T automatically format the classes you've edited.
  • Use IntelliJ default formatting rules.
  • Regarding licensing:
    • You must be the original author of the code you suggest.
    • You must give the copyright to "the AsyncHttpClient Project".

Comments
  • Grizzly provider TimeoutException making async requests

    When making async requests using the Grizzly provider (from AHC 2.0.0-SNAPSHOT), I get some TimeoutExceptions that should not occur. The server is serving these requests very rapidly, and the JVM isn't GCing very much. The requests serve in a fraction of a second, but the Grizzly provider says they timed out after 9 seconds. If I set the Grizzly provider's timeout to a higher number of seconds, then it times out after that many seconds instead.

    Some stack trace examples:

    java.util.concurrent.TimeoutException: Timeout exceeded
        at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider.timeout(GrizzlyAsyncHttpProvider.java:485)
        at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider$3.onTimeout(GrizzlyAsyncHttpProvider.java:276)
        at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:382)
        at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:362)
        at org.glassfish.grizzly.utils.DelayedExecutor$DelayedRunnable.run(DelayedExecutor.java:158)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)


    another stack trace:

    java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timeout exceeded
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
        at java.util.concurrent.FutureTask.get(FutureTask.java:111)
        at org.asynchttpclient.providers.grizzly.GrizzlyResponseFuture.get(GrizzlyResponseFuture.java:165)
        at org.ebaysf.webclient.benchmark.NingAhcGrizzlyBenchmarkTest.asyncWarmup(NingAhcGrizzlyBenchmarkTest.java:105)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.ebaysf.webclient.benchmark.AbstractBenchmarkTest.doBenchmark(AbstractBenchmarkTest.java:168)
        at org.ebaysf.webclient.benchmark.NingAhcGrizzlyBenchmarkTest.testAsyncLargeResponses(NingAhcGrizzlyBenchmarkTest.java:84)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
        at org.junit.rules.RunRules.evaluate(RunRules.java:20)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:45)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:119)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:101)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103)
        at com.sun.proxy.$Proxy0.invoke(Unknown Source)
        at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150)
        at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69)
    Caused by: java.util.concurrent.TimeoutException: Timeout exceeded
        at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider.timeout(GrizzlyAsyncHttpProvider.java:485)
        at org.asynchttpclient.providers.grizzly.GrizzlyAsyncHttpProvider$3.onTimeout(GrizzlyAsyncHttpProvider.java:276)
        at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:382)
        at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:362)
        at org.glassfish.grizzly.utils.DelayedExecutor$DelayedRunnable.run(DelayedExecutor.java:158)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

    Here's what my asyncWarmup() method looks like:

    public void asyncWarmup(final String testUrl) {
        List<Future<Response>> futures = new ArrayList<Future<Response>>(warmupRequests);
        for (int i = 0; i < warmupRequests; i++) {
            try {
                futures.add(this.client.prepareGet(testUrl).execute());
            } catch (IOException e) {
                System.err.println("Failed to execute get at iteration #" + i);
            }
        }
    
        for (Future<Response> future : futures) {
            try {
                future.get();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
    }
    

    And here's how the client is initialized:

    @Override
    protected void setup() {
        super.setup();
    
        GrizzlyAsyncHttpProviderConfig providerConfig = new GrizzlyAsyncHttpProviderConfig();
        AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
                .setAsyncHttpClientProviderConfig(providerConfig)
                .setMaximumConnectionsTotal(-1)
                .setMaximumConnectionsPerHost(4500)
                .setCompressionEnabled(false)
                .setAllowPoolingConnection(true /* keep-alive connection */)
                // .setAllowPoolingConnection(false /* no keep-alive connection */)
                .setConnectionTimeoutInMs(9000).setRequestTimeoutInMs(9000)
                .setIdleConnectionInPoolTimeoutInMs(3000).build();
    
        this.client = new AsyncHttpClient(new GrizzlyAsyncHttpProvider(config), config);
    
    }
    
    Grizzly 
    opened by jbrittain 55
  • Allow DefaultSslEngineFactory subclass customization of the SslContext

    See #1170 for context.

    If you have ideas for how to usefully test this, I'm happy to write them up, but it wasn't obvious to me how to usefully test this change.

    Enhancement 
    opened by marshallpierce 38
  • FeedableBodyGenerator - LEAK: ByteBuf.release() was not called before it's garbage-collected.

    When I use custom FeedableBodyGenerator or SimpleFeedableBodyGenerator I see the next error message:

    [error] i.n.u.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak reporting, specify the JVM option '-Dio.netty.leakDetection.level=advanced' or call ResourceLeakDetector.setLevel() See http://netty.io/wiki/reference-counted-objects.html for more information.

    I guess the main problem is here: https://github.com/netty/netty/blob/4.0/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java#L990-L992 they just assign null to message, but should also call 'release' (ReferenceCountUtil.release(msg);) Am I right?

    Defect 
    opened by mielientiev 37
  • AsyncHttpClient does not close sockets under heavy load (1.9 only)

    If you create 1000 requests in a very short time frame and use connection pool with AsyncHttpClient 1.9.21 and Netty 3.10.1, then some sockets will leak and stay open even past the idle socket reaper. This was initially filed as https://github.com/playframework/playframework/issues/5215, but can be replicated without Play WS.

    Created a reproducing test case here: https://github.com/wsargent/asynchttpclient-socket-leak

    If you have 50 requests, they'll all be closed immediately. If you have 1000 requests, they'll stay open for a while. After roughly two minutes, AHC will close off all idle sockets, but up to 30 will never die and will always be established.

    To see the dangling sockets, grep the lsof output for the pid of the java process:

    sudo lsof -i | grep 31602
    

    You'll see

    java      31602       wsargent   89u  IPv6 0xe1b25a8062380645      0t0  TCP 192.168.1.106:58646->ec2-54-173-126-144.compute-1.amazonaws.com:https (ESTABLISHED)
    

    The client port number is your key into the application: if you search for "58646" in application.log, then you'll see that there's a connection associated with it:

    2015-11-02 20:41:38,496 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #1 - [id: 0x5650b318, /192.168.1.106:58646 => playframework.com/54.173.126.144:443] RECEIVED: BigEndianHeapChannelBuffer(ridx=0, widx=2357, cap=2357)
    

    You can see the lifecycle of a handle by using grep:

    grep "0x5650b318" application.log
    

    and what's interesting is that while most ids will have a CLOSE / CLOSED lifecycle associated with them:

    2015-11-02 20:41:45,878 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in Hashed wheel timer #1 - [id: 0x34804fcc, /192.168.1.106:59122 => playframework.com/54.173.126.144:443] WRITE: BigEndianHeapChannelBuffer(ridx=0, widx=69, cap=69)
    2015-11-02 20:41:46,427 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 => playframework.com/54.173.126.144:443] CLOSE
    2015-11-02 20:41:46,427 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 :> playframework.com/54.173.126.144:443] DISCONNECTED
    2015-11-02 20:41:46,434 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 :> playframework.com/54.173.126.144:443] UNBOUND
    2015-11-02 20:41:46,434 [DEBUG] from org.jboss.netty.handler.logging.LoggingHandler in New I/O worker #2 - [id: 0x34804fcc, /192.168.1.106:59122 :> playframework.com/54.173.126.144:443] CLOSED
    

    In the case of "0x5650b318", there's no CLOSE event happening here. In addition, there's a couple of lines that say it's a cached channel:

    2015-11-02 20:41:33,340 [DEBUG] from com.ning.http.client.providers.netty.request.NettyRequestSender in default-akka.actor.default-dispatcher-4 - Using cached Channel [id: 0x5650b318, /192.168.1.106:58646 => playframework.com/54.173.126.144:443]
    2015-11-02 20:41:33,340 [DEBUG] from com.ning.http.client.providers.netty.request.NettyRequestSender in default-akka.actor.default-dispatcher-4 - Using cached Channel [id: 0x5650b318, /192.168.1.106:58646 => playframework.com/54.173.126.144:443]
    

    So I think Netty is not closing cached channels even if they are idle, in some circumstances.

    Defect Netty Contributions Welcome! 
    opened by wsargent 37
  • IOException: Too many connections per host

    After we set .setMaxConnectionsPerHost(64), our server seemed to happily work. We can pound it with traffic and see very few issues with connections since its so efficient at pooling connections that are in good condition.

    Lib: async-http-client 1.9.15 java version "1.7.0_67"

    After a while, however (about 24 hours), we start getting the above exception coming from the ChannelManager.

    Looking at the NettyResponseListener code, I noticed something odd.

    https://github.com/AsyncHttpClient/async-http-client/blob/b85d5b3505d9f6e80d278fef88876f6546e73079/providers/netty4/src/main/java/org/asynchttpclient/providers/netty4/request/NettyRequestSender.java

    In NettyRequestSender.sendRequestWithNewChannel(), there's this bit:

            boolean channelPreempted = false;
            String partition = null;
    
            try {            // Do not throw an exception when we need an extra connection for a
                // redirect.
                if (!reclaimCache) {
    
                    // only compute when maxConnectionPerHost is enabled
                    // FIXME clean up
                    if (config.getMaxConnectionsPerHost() > 0)
                        partition = future.getPartitionId();
    
                    channelManager.preemptChannel(partition);
                }
    
                if (asyncHandler instanceof AsyncHandlerExtensions)
                    AsyncHandlerExtensions.class.cast(asyncHandler).onOpenConnection();
    
                ChannelFuture channelFuture = connect(request, uri, proxy, useProxy, bootstrap, asyncHandler);
                channelFuture.addListener(new NettyConnectListener<T>(future, this, channelManager, channelPreempted, partition));
    
            } catch (Throwable t) {
                if (channelPreempted)
                    channelManager.abortChannelPreemption(partition);
    
                abort(null, future, t.getCause() == null ? t : t.getCause());
            }
    

    If you notice, channelPreempted never gets written to. Isn't channelPreempted = true missing from the block where the channel is preempted?

    Shouldn't it be:

            boolean channelPreempted = false;
            String partition = null;
    
            try {            // Do not throw an exception when we need an extra connection for a
                // redirect.
                if (!reclaimCache) {
    
                    // only compute when maxConnectionPerHost is enabled
                    // FIXME clean up
                    if (config.getMaxConnectionsPerHost() > 0)
                        partition = future.getPartitionId();
    
                    channelManager.preemptChannel(partition);
                    channelPreempted = true;
                }
    
                if (asyncHandler instanceof AsyncHandlerExtensions)
                    AsyncHandlerExtensions.class.cast(asyncHandler).onOpenConnection();
    
                ChannelFuture channelFuture = connect(request, uri, proxy, useProxy, bootstrap, asyncHandler);
                channelFuture.addListener(new NettyConnectListener<T>(future, this, channelManager, channelPreempted, partition));
    
            } catch (Throwable t) {
                if (channelPreempted)
                    channelManager.abortChannelPreemption(partition);
    
                abort(null, future, t.getCause() == null ? t : t.getCause());
            }
    

    The same class for netty3 has the correct code:

    https://github.com/AsyncHttpClient/async-http-client/blob/b85d5b3505d9f6e80d278fef88876f6546e73079/providers/netty3/src/main/java/org/asynchttpclient/providers/netty3/request/NettyRequestSender.java

    Waiting for user 
    opened by yoeduardoj 27
  • Grizzly provider fails to handle HEAD with Content-Length header

    I am trying to use the Grizzly provider (v1.7.6), and noticed a timeout for a simple HEAD request. Since this is a local test with a 15 second timeout, it looks like this is due to blocking. The same does not happen with the Netty provider.

    My best guess at the underlying problem is that the Grizzly provider expects there to be content to read since Content-Length is returned. This would be an incorrect assumption, since the HTTP specification explicitly states that responses to HEAD requests may contain a length indicator, but there is never a payload entity to return.

    Looking at the Netty provider code, I can see explicit handling for this use case, where the connection is closed and any content flushed (in case the server did send something). I did not see similar handling in the Grizzly provider, but since the implementation's code structure is very different, it may reside somewhere else.

    opened by cowtowncoder 24
  • Spawning AHC 2.0 w/ Netty 4 instances very fast leads to fd/thread/memory starvation

    After updating the Play codebase to AHC 2.0-alpha9, I've started to experience issues with non-terminating tests because of OutOfMemoryException. Until today, I wrongly assumed this was caused by the changes I made in AHC to support reactive streams, but that turned out not to be the case. In fact, I can reproduce it even with AHC 2.0-alpha8, which doesn't include the reactive streams support.

    Here is a link to the truly long thread dump demonstrating that AHC 2.0-alpha8 is leaking threads https://gist.github.com/dotta/6e388962cf0d904e8170

    This issue is currently preventing https://github.com/playframework/playframework/pull/5082 from building successfully.

    Defect Netty 
    opened by dotta 23
  • Backpressure in AsyncHandler

    AsyncHandler provides no mechanism to send back pressure on receiving the body parts.

    Imagine you have a server that stores large files on Amazon S3, and streams them out to clients, using async http client to connect to S3. Now imagine you have a very slow client, that connects and downloads a file. The slow client pushes back on the server via TCP. However, async http client will keep on calling onBodyPartReceived as fast as S3 provides it with data. The AsyncHandler implementation will have three choices:

    1. Block. Then it's blocking a worker thread, preventing other concurrent operations from happening. This is not an option.
    2. Buffer. Eventually this will cause an OutOfMemoryError. This is not an option.
    3. Drop. Then the client gets a corrupted file. This is not an option.

    AsyncHandler therefore needs a mechanism to propagate back pressure when handling body parts. One possibility here is to provide a method to say whether you are interested in receiving more data or not. This would correspond to a Channel.setReadable(true/false) in the netty provider, which will push back via TCP flow control. This could either be provided by injecting some sort of "channel" object into the AsyncHandler, or, since HttpResponseBodyPart already provides mechanisms for talking back to the channel (eg closeUnderlyingConnection()), it could be provided there.

    Enhancement Contributions Welcome! 
    opened by jroper 23
  • Upgrade to Netty 4.1

    We've just released 2.0, which targets Netty 4.0, so we won't rush into this.

    This issue is more of a mind map of what's to do:

    • drop ChannelId backport
    • drop DNS backport
    • investigate Netty's ChannelPool
    Enhancement 
    opened by slandelle 22
  • wss through proxy can't connect

    I am unable to establish a wss connection using async-http-client with either the netty or the grizzly async handler when connecting through a proxy server. In the netty case, what appears to happen is that NettyAsyncHttpProvider issues a connect request to the proxy, but the next request, which I would expect to be the upgrade request, is not correct.

    The logs look like

    DefaultHttpRequest(chunked: false)
    CONNECT 192.168.1.124:443 HTTP/1.0
    Upgrade: WebSocket
    Connection: Upgrade
    Origin: http://192.168.1.124:443
    Sec-WebSocket-Key: y3xU3BMOqCn6b3JBwKtEVA==
    Sec-WebSocket-Version: 13
    Host: 192.168.1.124
    Proxy-Connection: keep-alive
    Accept: /
    User-Agent: NING/1.0
    
    using Channel
    [id: 0x38827968]
    
    WebSocket Closed
    

    The important piece is that there is no GET between the CONNECT and the Upgrade: WebSocket. It's also not clear if it's reading the response from the proxy server before sending the https request. It appears that the websocket is closed when an HTTP/1.0 200 Connection established is received from the proxy server, which is interpreted as an invalid response.

    opened by peoplesmeat 22
  • StreamedResponsePublisher cancelled() does not close the channel properly.

    StreamedResponsePublisher does not cancel the channel properly.

    in https://github.com/AsyncHttpClient/async-http-client/blob/master/client/src/main/java/org/asynchttpclient/netty/handler/StreamedResponsePublisher.java

    We are using play-ws client which wraps AsyncHttpClient. When streams are cancelled (such as the downloader cancelling their download) we notice the following:

    1. this.logger.debug("Subscriber cancelled, ignoring the rest of the body"); is called, so the publisher does know that the stream was cancelled.

    However, this does not result in Connection.close.
    The callback is registered here (https://github.com/AsyncHttpClient/async-http-client/blob/90124a5caf414658d537799116c7d4f3d1ad45dd/client/src/main/java/org/asynchttpclient/netty/channel/ChannelManager.java#L407), but it is never invoked, so a channel-closed exception is never raised.

    KeepAlive is false and we have the various connection pool timeouts at 15 seconds.

    Also, it appears that the callback is set as an attribute on the channel. Is that the correct approach, or could the callback be overwritten by more incoming data or another thread?

    Any thoughts?
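    What the reporter expects can be sketched with the JDK's Flow interfaces (Java 9+; hypothetical names, not AHC code): cancelling the subscription should also release the underlying channel, not merely stop delivering body parts.

```java
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicBoolean;

public class CancelClosesChannel {

    // Stand-in for a publisher backed by a network channel, in the role of
    // StreamedResponsePublisher.
    static class ChannelBackedPublisher implements Flow.Publisher<byte[]> {
        final AtomicBoolean channelClosed = new AtomicBoolean(false);

        @Override
        public void subscribe(Flow.Subscriber<? super byte[]> s) {
            s.onSubscribe(new Flow.Subscription() {
                @Override public void request(long n) { s.onNext(new byte[] {1}); }
                // The behaviour the issue asks for: cancel() releases the
                // channel instead of only discarding the rest of the body.
                @Override public void cancel() { channelClosed.set(true); }
            });
        }
    }

    static boolean runScenario() {
        ChannelBackedPublisher pub = new ChannelBackedPublisher();
        pub.subscribe(new Flow.Subscriber<byte[]>() {
            Flow.Subscription sub;
            @Override public void onSubscribe(Flow.Subscription s) { sub = s; s.request(1); }
            @Override public void onNext(byte[] b) { sub.cancel(); } // downloader gives up mid-stream
            @Override public void onError(Throwable t) { }
            @Override public void onComplete() { }
        });
        return pub.channelClosed.get();
    }

    public static void main(String[] args) {
        System.out.println("channel closed on cancel: " + runScenario());
    }
}
```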

    Defect 
    opened by vsabella 21
  • java.io.IOException: Invalid Status code=200 text=OK

    java.io.IOException: Invalid Status code=200 text=OK

    Hello! I'm attempting to set up a websocket with AsyncHttpClient, similar to how it's set up here - https://www.baeldung.com/async-http-client-websockets

    But I always get the same error and the socket closes: java.io.IOException: Invalid Status code=200 text=OK. Any ideas what I might try? I know the websocket server is working; I tested it with a browser plugin and I can see the messages streaming.
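    For context on what the error means (a hedged illustration, not AHC's actual code): a WebSocket opening handshake must be answered with 101 Switching Protocols, so a plain 200 OK means the endpoint served a normal HTTP response, often because the URL isn't the WebSocket endpoint or a proxy answered in its place.

```java
import java.io.IOException;

public class HandshakeCheck {

    // A WebSocket handshake is only successful on 101 Switching Protocols.
    static boolean isUpgrade(int status) {
        return status == 101;
    }

    static void validateUpgrade(int status, String text) throws IOException {
        if (!isUpgrade(status)) {
            throw new IOException("Invalid Status code=" + status + " text=" + text);
        }
    }

    public static void main(String[] args) {
        try {
            validateUpgrade(200, "OK"); // the response the server in this issue returned
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```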

    opened by supertick 2
  • [Question] Is it possible to reuse the http parser in the library?

    [Question] Is it possible to reuse the http parser in the library?

    Hi all, as my question suggests, I am currently looking for a way to obtain the corresponding HTTP request (or response) when given a byte array as input. Is this possible?

    Thanks!
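    AHC delegates HTTP parsing to Netty's codec, which it does not re-expose as a standalone byte[]-to-request API; with Netty on the classpath you could feed the bytes through an EmbeddedChannel with an HttpRequestDecoder. As a plain-JDK illustration of the idea (a hand-rolled split, not the library's parser):

```java
import java.nio.charset.StandardCharsets;

public class TinyHttpParse {

    // Splits the request line of a raw HTTP message into method, target and
    // version. Illustration only; real parsing should use Netty's decoder.
    static String[] requestLine(byte[] raw) {
        String text = new String(raw, StandardCharsets.ISO_8859_1);
        return text.split("\r\n", 2)[0].split(" ");
    }

    public static void main(String[] args) {
        byte[] raw = "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
                .getBytes(StandardCharsets.ISO_8859_1);
        String[] parts = requestLine(raw);
        System.out.println(parts[0] + " " + parts[1]); // method and request target
    }
}
```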

    opened by DSimsek000 1
  • Request compression support in async-http-client

    Request compression support in async-http-client

    Hi,

    Does async-http-client support sending compressed requests (compressed by the algorithm named in the Content-Encoding header) to a server? Can you share a code snippet showing how to add request compression?
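    A sketch under the assumption that AHC does not compress request bodies for you: gzip the payload yourself and label it with Content-Encoding. The AHC calls shown in the comment use its public DSL; the payload is hypothetical.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRequestBody {

    static byte[] gzip(byte[] plain) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(plain);
        }
        return bos.toByteArray();
    }

    static byte[] gunzip(byte[] compressed) throws Exception {
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
            return bos.toByteArray();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] body = gzip("{\"hello\":\"world\"}".getBytes(StandardCharsets.UTF_8));
        // With AHC, the compressed bytes would then be sent roughly as:
        //   client.preparePost(url)
        //         .setHeader("Content-Encoding", "gzip")
        //         .setBody(body)
        //         .execute();
        System.out.println(new String(gunzip(body), StandardCharsets.UTF_8)); // round-trips
    }
}
```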

    opened by chandrav723 4
  • Request times out after being sent in AHC

    Request times out after being sent in AHC

    Hi everyone, I am using the org.asynchttpclient library, version 2.12.3, in a service that makes a lot of downstream API calls to different services.

    Earlier I used a single AHC instance for all downstream calls, with maxConnectionPool=1000, maxConnectionsPerHost=500 and keep-alive=true. Requests were sent from my client service to the target services, but many of them timed out with the message "Request timeout to after ms". From what I observed, the responses from the services arrived within the timeout duration, yet the requests still timed out.

    As an experiment I increased the number of AHC instances and used a separate instance for each downstream service, with the config otherwise unchanged. The timeouts decreased significantly and the overall response times of the downstream calls improved, but some false timeouts still occur. I have also only seen this happen when reusing an open channel; with a new channel it never seems to happen. I suspect a thread is blocked for some reason and cannot process new responses arriving on the channel. Is this a known bug, or could there be an issue with my implementation? Can someone throw some light on it?

    Defect 
    opened by saurabh782 4
  • Huge delay in request processing when using AsyncHttpClient

    Huge delay in request processing when using AsyncHttpClient

    I created a simple HTTP request using the AHC client. I am seeing it take around 680 ms for an endpoint that normally takes about 50 ms (verified with curl). What is the reason for such a large response time with the AHC client? Do I need to set something before using it? Not sure what I am missing. Note that I am only sending a single request at a time.

    import java.util.concurrent.Future;
    import org.asynchttpclient.AsyncHttpClient;
    import org.asynchttpclient.Response;
    import static org.asynchttpclient.Dsl.asyncHttpClient;
    
    AsyncHttpClient client = asyncHttpClient();
    
    long startTime = System.nanoTime();
    Future<Response> responseFuture = client.prepareGet(healthCheckRequest.getUriString()).execute();
    Response res = responseFuture.get();
    long elapsedMillis = (System.nanoTime() - startTime) / 1_000_000;
    System.out.println("Elapsed: " + elapsedMillis + " ms");
    
    opened by swasti7 0