Vert.x is a tool-kit for building reactive applications on the JVM

Overview

Vert.x Core

This is the repository for Vert.x core.

Vert.x core contains fairly low-level functionality, including support for HTTP, TCP, file system access, and various other features. You can use this directly in your own applications, and it's used by many of the other components of Vert.x.

For more information on Vert.x and where Vert.x core fits into the big picture, please see the website.

Building Vert.x artifacts

> mvn package

Running tests

Run the tests:

> mvn test

Vert.x supports native transport on BSD and Linux. To run the tests with the native transport:

> mvn test -PtestNativeTransport

Vert.x supports domain sockets on Linux only. To run the tests with domain sockets:

> mvn test -PtestDomainSockets

Vert.x has a few integration tests that run in a differently configured JVM (classpath, system properties, etc.) for ALPN, native transport, and logging:

> mvn verify -Dtest=FooTest # FooTest does not exist; its only purpose is to execute no tests during the test phase

Building documentation

> mvn package -Pdocs -DskipTests

Open target/docs/vertx-core/java/index.html in your browser.

Comments
  • HAProxy protocol support

    Signed-off-by: zenios [email protected]

    Notes:

    1. Hardcoded netty version (4.1.48.Final) for netty-codec-haproxy in order to avoid creating a pull request for vertx-dependencies

    2. Tests were copied from #1271. That PR can be closed if this one gets merged

    opened by zenios 88
  • HTTP/1.x server connection back-pressure management might reorder request body chunks

    As discussed here previously, starting with Vert.x 3.5.0 I was noticing failing integration tests when uploading large (at least a few MB) files. So far, the issue has only appeared when running against MongoDB in a Docker container or with Travis CI. Downgrading to Vert.x 3.4.0 seems to fix it. The issue does not appear on every build. I once examined the differences between the uploaded and the persisted file: roughly the first 1054400 bytes were correctly persisted, but subsequent bytes were corrupt. I could not find any issues with file download; only the upload seems to encounter this problem.

    I have put a branch together which explicitly compares the MD5 checksums of the uploaded file, in addition to the actual bytes, with Vert.x 3.5.1-SNAPSHOT (Travis build available here - source).

    -------------------------------------------------------
     T E S T S
    -------------------------------------------------------
    Running com.github.sth.vertx.mongo.streams.GridFSInOutStreamIT
    Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} 
    Opened connection [connectionId{localValue:1, serverValue:1}] to localhost:27017 
    Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 10]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=4467832} 
    Opened connection [connectionId{localValue:2, serverValue:2}] to localhost:27017 
    uploaded file md5: e875acd2c89e7db444f04c54c5b01460
    java.lang.AssertionError: uploaded file md5 does not match md5 calculated on server. Not equals : 31C48A5A246C9352E0EB846FEFD0A498 != E875ACD2C89E7DB444F04C54C5B01460 
    Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} 
    Opened connection [connectionId{localValue:3, serverValue:3}] to localhost:27017 
    Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 10]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=895998} 
    Opened connection [connectionId{localValue:4, serverValue:4}] to localhost:27017 
    Closed connection [connectionId{localValue:2, serverValue:2}] to localhost:27017 because the pool has been closed. 
    Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} 
    Opened connection [connectionId{localValue:5, serverValue:5}] to localhost:27017 
    Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 10]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=2004399} 
    Opened connection [connectionId{localValue:6, serverValue:6}] to localhost:27017 
    uploaded file md5: b563f831734cf630117017f006470f8a
    Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} 
    No server chosen by WritableServerSelector from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out 
    Closed connection [connectionId{localValue:6, serverValue:6}] to localhost:27017 because the pool has been closed. 
    Opened connection [connectionId{localValue:7, serverValue:7}] to localhost:27017 
    Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 10]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=651115} 
    Opened connection [connectionId{localValue:8, serverValue:8}] to localhost:27017 
    Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.561 sec <<< FAILURE! - in com.github.sth.vertx.mongo.streams.GridFSInOutStreamIT
    testUploadAndDownloadLarge(com.github.sth.vertx.mongo.streams.GridFSInOutStreamIT)  Time elapsed: 4.324 sec  <<< FAILURE!
    java.lang.AssertionError: uploaded file md5 does not match md5 calculated on server. Not equals : 31C48A5A246C9352E0EB846FEFD0A498 != E875ACD2C89E7DB444F04C54C5B01460
    

    This is a build using Vert.x 3.4.0, which has never failed so far.
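The failing assertion above compares hex-encoded MD5 digests of the uploaded and persisted bytes. A minimal, Vert.x-free sketch of that comparison (the class and method names here are hypothetical, not part of the reporter's branch):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Check {
    // Hex-encode the MD5 digest of a byte array, as the integration test
    // compares hex digest strings of the uploaded and persisted files.
    static String md5Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always present in the JDK
        }
    }

    public static void main(String[] args) {
        byte[] uploaded = "hello".getBytes();
        byte[] persisted = "hello".getBytes();
        // Compare case-insensitively: the server log reports uppercase hex digests
        System.out.println(md5Hex(uploaded).equalsIgnoreCase(md5Hex(persisted)));
    }
}
```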

    opened by st-h 75
  • feat: add keepAliveTTL to evict connection even if active

    Signed-off-by: Srijan Gupta [email protected]

    Motivation:

    If you are making HTTP requests to a remote service with keep-alive: true, the connections will never be recycled if the request rate is high. In this scenario, if a deployment of the remote service takes place with a blue-green configuration, our client will still send requests to the old stack. To solve this, keeping a global TTL for active connections makes sense, similar to Node's agentkeepalive: https://www.npmjs.com/package/agentkeepalive#new-agentoptions
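A minimal sketch of the proposed behavior, not Vert.x's actual pool code; the names (KeepAliveTtlSketch, PooledConn, evictExpired) are hypothetical:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

public class KeepAliveTtlSketch {
    // Hypothetical pooled connection carrying only its creation time.
    static class PooledConn {
        final long createdAtMillis;
        PooledConn(long createdAtMillis) { this.createdAtMillis = createdAtMillis; }
    }

    // Evict every connection whose age meets or exceeds the TTL, even if it is
    // still usable, so clients eventually reconnect to the new stack.
    static int evictExpired(Deque<PooledConn> pool, long nowMillis, long ttlMillis) {
        int evicted = 0;
        for (Iterator<PooledConn> it = pool.iterator(); it.hasNext(); ) {
            if (nowMillis - it.next().createdAtMillis >= ttlMillis) {
                it.remove();
                evicted++;
            }
        }
        return evicted;
    }

    public static void main(String[] args) {
        Deque<PooledConn> pool = new ArrayDeque<>();
        pool.add(new PooledConn(0));      // opened long ago, past the TTL
        pool.add(new PooledConn(9_000));  // opened recently
        System.out.println(evictExpired(pool, 10_000, 10_000)); // evicts one
        System.out.println(pool.size());                        // one remains
    }
}
```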

    opened by srijan02420 55
  • Non keep-alive HTTP/1.x requests close the connection immediately with compressed responses

    Hi all,

    First of all, thank you for the amazing work on Vert.x.

    I am testing version 3.5.0 with an nginx reverse proxy.

    If I turn off the keep-alive functionality, HttpServerResponseImpl does not answer anything.

    You can test using the following HTTP requests:

    Not working

    GET / HTTP/1.0
    Host: my-server.com
    X-Real-IP: 127.0.0.1
    X-Forwarded-For: 127.0.0.1
    Connection: close
    Cache-Control: max-age=0
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
    Accept-Encoding: gzip, deflate
    Accept-Language: pt-BR,pt;q=0.8,en-US;q=0.6,en;q=0.4
    

    Working

    GET / HTTP/1.1
    Host: my-server.com
    X-Real-IP: 127.0.0.1
    X-Forwarded-For: 127.0.0.1
    Cache-Control: max-age=0
    Upgrade-Insecure-Requests: 1
    User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
    Accept-Encoding: gzip, deflate
    Accept-Language: pt-BR,pt;q=0.8,en-US;q=0.6,en;q=0.4
    

    This is because of the Connection: close header.

    After debugging, I found this code at HttpServerResponseImpl line 442, which runs when keepAlive is false:

        if (!keepAlive) {
          closeConnAfterWrite();
          closed = true;
        }
    

    So I suspect that the closeConnAfterWrite is not working as expected.
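For context, the HTTP/1.x keep-alive decision the server has to make can be sketched in plain Java; this is an illustrative helper, not Vert.x's implementation:

```java
public class KeepAlive {
    // HTTP/1.1 defaults to keep-alive unless "Connection: close" is present;
    // HTTP/1.0 defaults to close unless "Connection: keep-alive" is present.
    static boolean isKeepAlive(String version, String connectionHeader) {
        if (connectionHeader != null) {
            String v = connectionHeader.trim().toLowerCase();
            if (v.equals("close")) return false;
            if (v.equals("keep-alive")) return true;
        }
        return version.equals("HTTP/1.1");
    }

    public static void main(String[] args) {
        System.out.println(isKeepAlive("HTTP/1.0", "close"));      // false
        System.out.println(isKeepAlive("HTTP/1.1", null));         // true
        System.out.println(isKeepAlive("HTTP/1.0", "keep-alive")); // true
    }
}
```

The failing request above is HTTP/1.0 with Connection: close, so the server must write the full (possibly compressed) response before closing, which is exactly what the reporter observes going wrong.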

    I would appreciate if you can take a look on it.

    opened by leobispo 54
  • Sni support

    Motivation: SNI is an important feature for virtual hosting.

    Changes: provides SNI support for HttpClient/HttpServer/NetClient/NetServer. On the server, certificates are matched using the certificate CN or SAN DNS; PemKeyCertOptions has been extended to support multiple entries for this purpose. For HttpClient, the host header value is sent as the SNI server name; for NetClient, the connect method is overloaded to provide the server name in addition to the connect host.

    Please review @pmlopes @alexlehm @cescoffier

    opened by vietj 53
  • On Windows, in some cases, ClasspathPathResolver.resolve() throws "IllegalArgumentException: URI is not hierarchical"

    Vert.x version : 2.0.0-CR3-SNAPSHOT

    When an opaque URI is created by "new URI(url.toExternalForm())", Paths.get throws IllegalArgumentException ( see ). An example of an opaque URI is "file:src/main/resources/index.html". There may be other solutions for this.
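A small, standalone reproduction of the underlying JDK behavior, with a hypothetical fallback for opaque URIs (the resolve helper below is illustrative, not the Vert.x method):

```java
import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;

public class OpaqueUriDemo {
    // Paths.get(URI) requires a hierarchical URI; "file:src/..." (with no "//"
    // authority part) is opaque, which triggers the IllegalArgumentException.
    static Path resolve(URI uri) {
        if (uri.isOpaque()) {
            // Fall back to treating the scheme-specific part as a plain path
            return Paths.get(uri.getSchemeSpecificPart());
        }
        return Paths.get(uri);
    }

    public static void main(String[] args) {
        URI opaque = URI.create("file:src/main/resources/index.html");
        System.out.println(opaque.isOpaque());  // true
        System.out.println(resolve(opaque));
    }
}
```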

    stacktrace:

    org.vertx.java.core.VertxException: java.lang.IllegalArgumentException: URI is not hierarchical
            at org.vertx.java.core.file.impl.ClasspathPathResolver.resolve(ClasspathPathResolver.java:61)
            at org.vertx.java.core.file.impl.PathAdjuster.adjust(PathAdjuster.java:35)
            at org.vertx.java.core.file.impl.PathAdjuster.adjust(PathAdjuster.java:42)
            at com.wingnest.mathcloud.MathCloud.start(MathCloud.java:27)
            at org.vertx.java.platform.Verticle.start(Verticle.java:82)
            at org.vertx.java.platform.impl.DefaultPlatformManager$18.run(DefaultPlatformManager.java:1270)
            at org.vertx.java.core.impl.DefaultContext$3.run(DefaultContext.java:171)
            at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:353)
            at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:366)
            at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
            at java.lang.Thread.run(Thread.java:722)
    Caused by: java.lang.IllegalArgumentException: URI is not hierarchical
            at sun.nio.fs.WindowsUriSupport.fromUri(WindowsUriSupport.java:122)
            at sun.nio.fs.WindowsFileSystemProvider.getPath(WindowsFileSystemProvider.java:91)
            at java.nio.file.Paths.get(Paths.java:138)
            at org.vertx.java.core.file.impl.ClasspathPathResolver.resolve(ClasspathPathResolver.java:59)
            ... 10 more
    
    opened by sgougi 41
  • Implement High Fault Tolerance: Vertx Cluster is fault tolerant when several nodes are killed

    Hi all,

    We are big fans of Vert.x! We are working on setting up a cluster of at least 20 Vert.x JVMs. When we started killing nodes - more than one node within a few seconds - we found that the Vert.x event bus cluster becomes unstable. After drilling down into the code, we found that the clusterMap of HAManager and the subs multimap of the ClusteredEventBus are often corrupted:

    • sometimes, references to killed nodes are still present in the clusterMap and subs multimap
    • sometimes, references to nodes still running are missing from the clusterMap and subs multimap

    So we suggest cleaning the maps from each node, not only from an eligible node. We also suggest that each node sends its key/value into the clusterMap and subs multimap every 10 seconds.

    The main benefit is that when nodes are killed, the maps are correctly cleaned up after a few seconds, and running nodes repopulate them. Even under load (we tested it), the cluster becomes available again and 100% operational. We tested up to 20 running nodes and 80% node loss while the system was under load. The main drawback is that each node is responsible for constantly sending (every 10s) its key/value into the clusterMap and the subs multimap.

    So we propose enabling this behavior only when HFT (High Fault Tolerance) is enabled via VertxOptions (false by default), since it adds additional traffic on the cluster that is not always necessary depending on the deployment. We also provide the ability to change the interval at which data is sent to the maps (HFTInterval), which is 10s by default.
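A rough, Vert.x-free sketch of the periodic re-publication idea (the names RefreshSketch and startRefresher are hypothetical; the real proposal operates on the cluster manager's shared maps):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RefreshSketch {
    static final Map<String, String> clusterMap = new ConcurrentHashMap<>();

    // Periodically re-publish this node's own entry so that stale or corrupted
    // cluster-map state self-heals without coordination from an elected node.
    static ScheduledExecutorService startRefresher(String nodeId, long intervalMillis) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(() -> clusterMap.put(nodeId, "alive"),
                                0, intervalMillis, TimeUnit.MILLISECONDS);
        return ses;
    }

    public static void main(String[] args) throws InterruptedException {
        // The issue proposes a 10s default (HFTInterval); shortened for the demo.
        ScheduledExecutorService ses = startRefresher("node-1", 10);
        clusterMap.remove("node-1");      // simulate a corrupted cluster map entry
        Thread.sleep(100);                // wait for the next refresh tick
        System.out.println(clusterMap.containsKey("node-1"));
        ses.shutdownNow();
    }
}
```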

    Please comment on, accept, or reject this pull request. We are strongly considering Vert.x at the heart of our new infrastructure for deploying micro-services, and fault tolerance is the only thing we see as a showstopper.

    Same issue already raised by other users:

    • https://github.com/eclipse/vert.x/pull/1593
    • https://github.com/vert-x3/vertx-hazelcast/issues/13
    opened by polipodi 40
  • A single Verticle consumer is processing messages concurrently for a worker thread pool

    Version

    In which version did you encounter this bug? 4.0.2

    Context

    I'm using Vert.x verticles to process messages sequentially per address. Up to 3.9.5, a consumer would only process messages in sequential order, but now it seems to process them concurrently.

    Do you have a reproducer?

    Yes. Here is a simple unit test; it passes with 3.9.5 and fails with 4.0.2: https://gist.github.com/guidomedina/ff20d1531bf59e046dd5fd5599918052
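The sequential-per-address guarantee the reporter relies on can be illustrated without Vert.x by chaining tasks per address, even on a multi-threaded worker pool (SerializedConsumer and process are hypothetical names, not Vert.x APIs):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SerializedConsumer {
    // Serialize handling per address by chaining each task onto the previous
    // one; the pool has 4 threads, yet no two tasks for "addr" ever overlap.
    static String process(int count) {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        Map<String, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();
        StringBuilder order = new StringBuilder();
        for (int i = 0; i < count; i++) {
            int n = i;
            tails.compute("addr", (k, prev) ->
                (prev == null ? CompletableFuture.<Void>completedFuture(null) : prev)
                    .thenRunAsync(() -> order.append(n), workers));
        }
        tails.get("addr").join(); // wait for the last chained task
        workers.shutdown();
        return order.toString();
    }

    public static void main(String[] args) {
        System.out.println(process(5)); // messages handled strictly in order
    }
}
```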

    bug 
    opened by guidomedina 39
  • Hostname resolution does not handle answers set in additional section instead of answers section

    I'm trying to create a WebSocket client, but DNS resolution fails. It doesn't fail consistently; sometimes it succeeds. Example:

    package test;
    
    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.http.HttpClientOptions;
    
    public class Test extends AbstractVerticle {
      @Override
      public void start() {
        var httpOpts = new HttpClientOptions().setSsl(true);
        var client = vertx.createHttpClient(httpOpts);
    
        client.webSocket(443, "echo.websocket.org", "/", res -> {
          if (res.failed()) {
            res.cause().printStackTrace();
            return;
          }
          System.out.println("SUCCESS");
          res.result().handler(System.out::println);
        });
      }
    }
    

    Exception

    io.netty.resolver.dns.DnsResolveContext$SearchDomainUnknownHostException: Search domain query failed. Original hostname: 'echo.websocket.org' failed to resolve 'echo.websocket.org.Home' after 3 queries 
    	at io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:925)
    	at io.netty.resolver.dns.DnsResolveContext.tryToFinishResolve(DnsResolveContext.java:884)
    	at io.netty.resolver.dns.DnsResolveContext.query(DnsResolveContext.java:356)
    	at io.netty.resolver.dns.DnsResolveContext.onResponse(DnsResolveContext.java:543)
    	at io.netty.resolver.dns.DnsResolveContext.access$400(DnsResolveContext.java:64)
    	at io.netty.resolver.dns.DnsResolveContext$2.operationComplete(DnsResolveContext.java:400)
    	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:500)
    	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:493)
    	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:472)
    	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:413)
    	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:538)
    	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:527)
    	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:98)
    	at io.netty.resolver.dns.DnsQueryContext.setSuccess(DnsQueryContext.java:204)
    	at io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:196)
    	at io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:1296)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
    	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
    	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
    	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1421)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
    	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
    	at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:93)
    	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:697)
    	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:632)
    	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:549)
    	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511)
    	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
    	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    	at java.base/java.lang.Thread.run(Thread.java:834)
    
    

    Dig output

    ; <<>> DiG 9.14.7 <<>> echo.websocket.org
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26605
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
    
    ;; QUESTION SECTION:
    ;echo.websocket.org.		IN	A
    
    ;; ANSWER SECTION:
    echo.websocket.org.	60	IN	A	174.129.224.73
    
    ;; Query time: 45 msec
    ;; SERVER: 192.168.1.1#53(192.168.1.1)
    ;; WHEN: Thu Oct 17 12:26:26 CEST 2019
    ;; MSG SIZE  rcvd: 52
    

    I'm using vertx 3.8.2 on OpenJDK 11 on Arch Linux.

    invalid 
    opened by wooque 38
  • vertx2.0-CR2 can not recognize upload file data from phonegap

    Groovy code, simple and standard: uploading from a browser like Chrome works fine, but uploading from a PhoneGap app does nothing. So I think the reason is that uploadHandler cannot recognize upload file data from PhoneGap (the data is encoded a little differently).

    req.uploadHandler { upload ->
      upload.exceptionHandler { cause ->
        req.response.end(">>>>>>>>>>Upload failed");
      }

      upload.endHandler {
        println ">>>>>>>>>upload ok!"
        req.response.end("Upload successful!");
      }

      upload.streamToFileSystem(upload.filename);
    }
    
    opened by zeneye 35
  • Http client hangs if connection pooling is used (default)

    I'm writing a non-blocking HTTP proxy server and ran into an issue: after several benchmark iterations, my server stops handling requests. As far as I can see, pooled connections are in the CLOSE_WAIT state (according to netstat), which means the remote server has closed them, but the Vert.x HTTP client still keeps them for sending data. I suppose that when a new request arrives, Vert.x acquires one of them to pass the request and then hangs.

    My environment: Vert.x 3.1.0, JDK 1.8, OS X 10.9.5.

    Code sample (Kotlin):

    val clientOptions = HttpClientOptions()
        .setDefaultHost("$BUCKET.s3.amazonaws.com")
        .setMaxPoolSize(1000)

    val client = vertx.createHttpClient(clientOptions)

    val serverOptions = HttpServerOptions()
        .setAcceptBacklog(1000)

    vertx.createHttpServer(serverOptions)
        .requestHandler { request ->
            val path = request.path()

            when (request.method()) {
                HttpMethod.GET -> {
                    client.get(path) { s3Response ->
                        if (s3Response.statusCode() != 200)
                            request.response().setStatusCode(s3Response.statusCode()).end()
                        else {
                            val response = request.response()
                                .putHeader(LENGTH_HEADER, s3Response.getHeader(LENGTH_HEADER))

                            val pump = Pump.pump(s3Response, response)

                            s3Response.endHandler { response.end() }

                            pump.start()
                        }
                    }
                    .putHeader("Date", "...") // put required headers
                    .end()
                }
                else ->
                    request.response().setStatusCode(400).end("Unsupported HTTP verb")
            }
        }
        .listen(8080) { result ->
            if (result.succeeded())
                future.complete()
            else
                future.fail(result.cause())
        }

    ...

    @JvmStatic fun main(args: Array<String>) {
        Launcher.main(arrayOf("run", MyVerticle::class.java.name))
    }

    The issue looks similar to https://github.com/eclipse/vert.x/issues/1218 but differs in what triggers the connection close - in my case, the remote server closes the connection explicitly. I suppose the HTTP client should close it in turn and remove it from the pool.
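The fix the reporter suggests, dropping connections the remote side has already closed instead of handing them out again, can be sketched in plain Java (PoolSketch, Conn, and acquire are hypothetical names, not Vert.x's pool implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PoolSketch {
    // Hypothetical pooled connection; remoteClosed models the CLOSE_WAIT state
    // observed in netstat after the remote server closes its end.
    static class Conn {
        final boolean remoteClosed;
        Conn(boolean remoteClosed) { this.remoteClosed = remoteClosed; }
    }

    static Conn acquire(Deque<Conn> pool) {
        Conn c;
        while ((c = pool.poll()) != null) {
            if (!c.remoteClosed) return c; // healthy: safe to reuse
            // otherwise silently discard the half-closed connection
        }
        return new Conn(false); // pool exhausted: open a fresh connection
    }

    public static void main(String[] args) {
        Deque<Conn> pool = new ArrayDeque<>();
        pool.add(new Conn(true));   // stale, remote closed it
        pool.add(new Conn(false));  // healthy
        System.out.println(acquire(pool).remoteClosed); // false
    }
}
```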

    opened by azhuchkov 34
  • HttpServer sslHandshakeTimeout not work when enable sni

    Version

    4.3.5

    Steps to reproduce

    HttpServerOptions httpsOptions = new HttpServerOptions()
                    .setSsl(true)
                    .setSni(true)
                    .setKeyCertOptions(certOptions)
                    .setTrustOptions(trustOptions)
                    .setSslHandshakeTimeout(10_000)
                    .setSslHandshakeTimeoutUnit(TimeUnit.MILLISECONDS);
    
    Vertx
      .vertx()
      .createHttpServer(httpsOptions)
      .requestHandler(xxx)
      .listen();
    
    bug 
    opened by coding4m 0
  • Openssl no longer has much better performance

    • https://www.oracle.com/java/technologies/javase/8-whats-new.html - "Hardware intrinsics were added to use Advanced Encryption Standard (AES)."
    • https://openjdk.org/jeps/164 - "JEP 164: Leverage CPU Instructions for AES Cryptography"
    • https://docs.oracle.com/javase/9/whatsnew/#d1373e94 - "Improves performance ranging from 34x to 150x for AES/GCM/NoPadding"
    • https://openjdk.org/jeps/246 - "JEP 246: Leverage CPU Instructions for GHASH and RSA"
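The cipher suite those JEPs accelerate can be exercised with plain javax.crypto; this is an illustrative round trip through the JDK provider, not a benchmark:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class AesGcmDemo {
    // AES/GCM/NoPadding via the plain JDK provider; this is the code path
    // the JEPs above speed up with CPU intrinsics (AES-NI, CLMUL).
    static boolean roundTrip(byte[] plaintext) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey key = kg.generateKey();

            byte[] iv = new byte[12];                      // 96-bit GCM nonce
            new SecureRandom().nextBytes(iv);
            GCMParameterSpec spec = new GCMParameterSpec(128, iv);

            Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
            enc.init(Cipher.ENCRYPT_MODE, key, spec);
            byte[] ct = enc.doFinal(plaintext);

            Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
            dec.init(Cipher.DECRYPT_MODE, key, spec);
            return Arrays.equals(dec.doFinal(ct), plaintext);
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello vert.x".getBytes())); // true
    }
}
```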

    opened by julianladisch 1
  • "java.lang.IllegalStateException: Request has already been read" when call pump.start for HttpServerRequest

    package vertx;

    import io.vertx.core.Future;
    import io.vertx.core.Promise;
    import io.vertx.core.Vertx;
    import io.vertx.core.http.*;
    import io.vertx.core.streams.Pump;
    import io.vertx.ext.web.Router;
    import io.vertx.ext.web.handler.LoggerFormat;
    import io.vertx.ext.web.handler.LoggerHandler;
    import lombok.extern.slf4j.Slf4j;

    import java.net.MalformedURLException;

    @Slf4j
    public class VertxHttpClient {

        private static Vertx vertx = Vertx.vertx();

        public static Future<HttpServer> createServer(Router router) {
            Promise<HttpServer> promise = Promise.promise();
            HttpServerOptions options = new HttpServerOptions().setLogActivity(true);
            vertx.createHttpServer(options).requestHandler(router).listen(8888, "0.0.0.0", promise);
            log.info("createServer!");
            return promise.future();
        }

        public static void main(String[] args) throws MalformedURLException {
            HttpClientOptions options = new HttpClientOptions().setLogActivity(true);
            HttpClient httpClient = vertx.createHttpClient(options);
            Router router = Router.router(vertx);
            router.route().handler(LoggerHandler.create(LoggerFormat.SHORT));
            router.get("/indicators").handler(ctx -> {
                HttpServerRequest rctRequest = ctx.request();
                HttpServerResponse rctResponse = ctx.response();

                RequestOptions requestOptions = new RequestOptions();
                requestOptions.setMethod(HttpMethod.GET).setAbsoluteURI("http://10.157.66.69:3030/indicators");
                // rctRequest.pause();
                httpClient.request(requestOptions).onSuccess(request -> {
                    request.response().onSuccess(r -> {
                        r.body().onSuccess(b -> rctResponse.end(b));
                    }).onFailure(err -> {
                        log.info("fail1: " + err.toString());
                        rctResponse.setStatusCode(500);
                        rctResponse.end();
                    });

                    // rctRequest.resume();
                    request.setChunked(true);
                    Pump reqPump = Pump.pump(rctRequest, request);
                    reqPump.start();

                    rctRequest.exceptionHandler(e -> {
                        log.error("exception:" + e);
                        reqPump.stop();
                        request.end();
                    });

                    rctRequest.endHandler(end -> {
                        log.error("rctRequest ended1!");
                        request.end();
                    });
                }).onFailure(err -> {
                    // rctRequest.resume();
                    log.info("fail2: " + err.toString());
                });
            });

            Future<HttpServer> httpServerFuture = createServer(router);

            httpServerFuture.onComplete(r -> {
                log.info("http start successful!");
            }).onFailure(err -> {
                log.info("http start failure!");
            });
        }
    }

    =============

    Previously, Vert.x 3 did not have this issue, but after upgrading to Vert.x 4 it happens.

    java.lang.IllegalStateException: Request has already been read
        at io.vertx.core.http.impl.Http1xServerRequest.checkEnded(Http1xServerRequest.java:655) ~[vertx-core-4.1.8.jar:4.1.8]
        at io.vertx.core.http.impl.Http1xServerRequest.handler(Http1xServerRequest.java:293) ~[vertx-core-4.1.8.jar:4.1.8]
        at io.vertx.ext.web.impl.HttpServerRequestWrapper.handler(HttpServerRequestWrapper.java:103) ~[vertx-web-4.1.8.jar:4.1.8]
        at io.vertx.ext.web.impl.HttpServerRequestWrapper.handler(HttpServerRequestWrapper.java:22) ~[vertx-web-4.1.8.jar:4.1.8]
        at io.vertx.core.streams.impl.PumpImpl.start(PumpImpl.java:86) ~[vertx-core-4.1.8.jar:4.1.8]
        at io.vertx.core.streams.impl.PumpImpl.start(PumpImpl.java:39) ~[vertx-core-4.1.8.jar:4.1.8]
        at vertx.VertxHttpClient.lambda$null$5(VertxHttpClient.java:52) ~[classes/:?]
        at io.vertx.core.impl.future.FutureImpl$1.onSuccess(FutureImpl.java:91) ~[vertx-core-4.1.8.jar:4.1.8]

    If I add pause and resume, it works, but the performance is worse than Vert.x 3 without pause and resume. So what is the problem? Or is there a better way to implement this?
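The pause/resume workaround amounts to buffering the request stream until the client request is ready to receive it. A Vert.x-free sketch of that pattern (all names here are hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

public class PauseResumeSketch {
    // Buffer items while paused and replay them to the handler on resume(),
    // mimicking pausing a request before the async client request completes.
    static class BufferedStream {
        private final Deque<String> buffer = new ArrayDeque<>();
        private boolean paused;
        private Consumer<String> handler;

        void pause() { paused = true; }

        void resume() {
            paused = false;
            while (!paused && !buffer.isEmpty() && handler != null) {
                handler.accept(buffer.poll()); // drain buffered items in order
            }
        }

        void handler(Consumer<String> h) { this.handler = h; }

        void emit(String item) {
            if (paused || handler == null) {
                buffer.add(item);   // hold data until the consumer is ready
            } else {
                handler.accept(item);
            }
        }
    }

    public static void main(String[] args) {
        BufferedStream stream = new BufferedStream();
        StringBuilder seen = new StringBuilder();
        stream.pause();              // pause before issuing the client request
        stream.emit("a");
        stream.emit("b");
        stream.handler(seen::append);
        stream.resume();             // handler attached: replay buffered data
        stream.emit("c");
        System.out.println(seen);    // abc
    }
}
```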

    bug 
    opened by shuitai 0
  • Metric - Http1xServerConnection request is null

    The exception below occurred with metrics enabled:

    java.lang.NullPointerException: Cannot invoke "io.vertx.core.http.impl.Http1xServerRequest.metric()" because "request" is null
        at io.vertx.core.http.impl.Http1xServerConnection.reportResponseComplete(Http1xServerConnection.java:253) 
        at io.vertx.core.http.impl.Http1xServerConnection.responseComplete(Http1xServerConnection.java:198) 
        at io.vertx.core.http.impl.Http1xServerResponse.end(Http1xServerResponse.java:415) 
        at io.vertx.core.http.impl.Http1xServerResponse.end(Http1xServerResponse.java:388) 
        at io.vertx.core.http.impl.Http1xServerResponse.end(Http1xServerResponse.java:367) 
        at io.vertx.ext.web.handler.impl.ErrorHandlerImpl.sendError(ErrorHandlerImpl.java:180) 
    

    Occurred on vertx version: 4.3.4 and 4.3.5

    bug 
    opened by glassfox 6
Owner
Eclipse Vert.x