ScaleCube Services is a high-throughput, low-latency reactive microservices library built to scale. It features API gateways, service discovery, and service load balancing. The architecture supports plug-and-play service communication modules and is built for high-performance, low-latency real-time stream processing. It is open and designed to accommodate change, with no sidecar in the form of a broker or any other intermediary.

Overview

scalecube-services


MICROSERVICES 2.0

An open-source project focused on streamlining reactive programming of microservice reactive systems that scale, built by developers for developers.

ScaleCube Services provides a low-latency reactive microservices library for peer-to-peer service registry and discovery based on a gossip protocol, with no single point of failure or bottleneck.

ScaleCube gracefully addresses the cross-cutting concerns of a distributed microservices architecture.

ScaleCube Services Features:
  • Provision and interconnect microservices peers in a cluster
  • Fully distributed with no single point of failure or bottleneck
  • Fast - Low latency and high throughput
  • Scalable over cores, JVMs, clusters, and regions
  • Built-in Service Discovery and service routing
  • Zero configuration, automatic peer-to-peer service discovery using gossip
  • Simple non-blocking, asynchronous programming model
  • Reactive Streams support.
    • Fire And Forget - Send a request and do not wait for a reply
    • Request Response - Send single request and expect single reply
    • Request Stream - Send single request and expect stream of responses.
    • Request bidirectional - send stream of requests and expect stream of responses.
  • Built-in failure detection, fault tolerance, and elasticity
  • Routing and balancing strategies for both stateless and stateful services
  • Embeddable into existing applications
  • Natural Circuit-Breaker via scalecube-cluster discovery and failure detector.
  • Supports service instance tagging.
  • Modular, flexible deployment models and topology
  • Pluggable API gateway providers (HTTP / WebSocket / RSocket)
  • Pluggable service transports (TCP / Aeron / RSocket)
  • Pluggable encoders (JSON, SBE, Google Protocol Buffers)

User Guide:

Basic Usage:

The following example provisions two cluster nodes and performs a remote interaction:

  1. seed is a member node that provisions no services of its own.
  2. The microservices variable is a member that joins the seed member and provisions a GreetingServiceImpl instance.
  3. Finally, from the seed node, create a proxy via the GreetingsService API and send a greeting request.
    //1. Start a seed node with no members and no services of its own
    Microservices seed = Microservices.builder().startAwait();

    //2. Construct a ScaleCube node which joins the cluster hosting the Greeting Service
    Microservices microservices =
        Microservices.builder()
            .discovery(
                self ->
                    new ScalecubeServiceDiscovery(self)
                        .options(opts -> opts.seedMembers(toAddress(seed.discovery().address()))))
            .transport(ServiceTransports::rsocketServiceTransport)
            .services(new GreetingServiceImpl())
            .startAwait();

    //3. Create service proxy
    GreetingsService service = seed.call().api(GreetingsService.class);

    // Execute the services and subscribe to service events
    service.sayHello("joe").subscribe(consumer -> {
      System.out.println(consumer.message());
    });
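
The example above refers to a GreetingsService API, a GreetingServiceImpl, and two helpers (ServiceTransports, toAddress) that are not shown. A minimal sketch of what the service API and implementation might look like (the Greeting DTO and its message() accessor are assumptions made for illustration; the helpers are left out):

// GreetingsService.java
import io.scalecube.services.annotations.Service;
import io.scalecube.services.annotations.ServiceMethod;
import reactor.core.publisher.Mono;

@Service
public interface GreetingsService {

  @ServiceMethod
  Mono<Greeting> sayHello(String name);
}

// GreetingServiceImpl.java
public class GreetingServiceImpl implements GreetingsService {

  @Override
  public Mono<Greeting> sayHello(String name) {
    // Request-Response: a single request in, a single reply out.
    return Mono.just(new Greeting("Nice to meet you " + name + " and welcome to ScaleCube"));
  }
}

// Greeting.java - simple DTO carrying the reply message, matching consumer.message() above.
public class Greeting {

  private final String message;

  public Greeting(String message) {
    this.message = message;
  }

  public String message() {
    return message;
  }
}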

Basic Service Example:

  • RequestOne: Send single request and expect single reply
  • RequestStream: Send single request and expect stream of responses.
  • RequestBidirectional: send stream of requests and expect stream of responses.

A service is nothing but an interface declaring what methods we wish to provision at our cluster.

@Service
public interface ExampleService {

  @ServiceMethod
  Mono<String> sayHello(String request);

  @ServiceMethod
  Flux<MyResponse> helloStream();

  @ServiceMethod
  Flux<MyResponse> helloBidirectional(Flux<MyRequest> requests);
}
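
The interface above only declares the contract. A minimal sketch of an implementation follows; MyRequest and MyResponse are assumed to be simple serializable DTOs, given hypothetical constructors and accessors here:

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ExampleServiceImpl implements ExampleService {

  @Override
  public Mono<String> sayHello(String request) {
    // RequestOne: single request, single reply.
    return Mono.just("Hello, " + request);
  }

  @Override
  public Flux<MyResponse> helloStream() {
    // RequestStream: single request, stream of replies.
    return Flux.range(1, 3).map(i -> new MyResponse("hello #" + i));
  }

  @Override
  public Flux<MyResponse> helloBidirectional(Flux<MyRequest> requests) {
    // RequestBidirectional: stream of requests mapped to a stream of replies.
    return requests.map(request -> new MyResponse("echo: " + request.message()));
  }
}

Such an implementation is registered on a node via Microservices.builder().services(new ExampleServiceImpl()), just like GreetingServiceImpl in the basic usage example above.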

API-Gateway:

Available API gateways are RSocket, HTTP, and WebSocket.

Basic API-Gateway example:

    Microservices.builder()
        .discovery(options -> options.seeds(seed.discovery().address()))
        .services(...) // OPTIONAL: services (if any) as part of this node.

        // configure the list of gateway plugins exposing the APIs
        .gateway(options -> new WebsocketGateway(options.id("ws").port(8080)))
        .gateway(options -> new HttpGateway(options.id("http").port(7070)))
        .gateway(options -> new RSocketGateway(options.id("rsws").port(9090)))

        .startAwait();

        // HINT: you can connect to these ports using the API sandbox to try out the API.
        // http://scalecube.io/api-sandbox/app/index.html

Maven

With scalecube-services you may plug and play alternative providers for transport, codecs, and discovery. ScaleCube uses ServiceLoader to load providers from the classpath.

You can think of ScaleCube as SLF4J for microservices. Currently supported SPIs:

Transport providers:

  • scalecube-services-transport-rsocket: uses RSocket to communicate with remote services.

Message codec providers:

Service discovery providers:

Binaries and dependency information for Maven can be found at http://search.maven.org.

https://mvnrepository.com/artifact/io.scalecube

To add a dependency on ScaleCube Services using Maven, use the following:


 <properties>
   <scalecube.version>2.x.x</scalecube.version>
 </properties>

 <!-- -------------------------------------------
   scalecube core and api:
 ------------------------------------------- -->

 <!-- scalecube apis   -->
 <dependency>
  <groupId>io.scalecube</groupId>
  <artifactId>scalecube-services-api</artifactId>
  <version>${scalecube.version}</version>
 </dependency>

 <!-- scalecube services module   -->
 <dependency>
  <groupId>io.scalecube</groupId>
  <artifactId>scalecube-services</artifactId>
  <version>${scalecube.version}</version>
 </dependency>


 <!--

     Plugins / SPIs: below is a list of providers you may choose from to construct your own configuration.
     You are welcome to build and contribute your own plugins; please consider the existing ones as examples.

  -->

 <!-- scalecube transport providers:  -->
 <dependency>
  <groupId>io.scalecube</groupId>
  <artifactId>scalecube-services-transport-rsocket</artifactId>
  <version>${scalecube.version}</version>
 </dependency>

Sponsored by OM2

Comments
  • Binding transport to IPv6 address causes IllegalFormatConversionException


    When starting Scalecube and one of the addresses only has an IPv6 address (for various reasons, in my case the DHCP didn't complete, but we could really be using only an IPv6 network) then Addressing uses NetworkInterface.getInetAddresses() to get an Inet6Address which looks something like this: /fe80:0:0:0:8cf6:f5c8:c946:2c30%eno1.

    Then when MembershipProtocol constructor tries to use the address to create a "name format" for ThreadFactoryBuilder, the % in the address causes the thread factory builder to crash with a java.util.IllegalFormatConversionException when it tries to create thread names using String.format().
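
    A minimal, self-contained illustration of the failure mode (not ScaleCube code): the scope suffix "%eno1" makes String.format treat "%e" as a floating-point conversion, which then fails against the integer argument that ThreadFactoryBuilder passes in.

    public class PercentInFormatDemo {

      public static void main(String[] args) {
        String address = "fe80:0:0:0:8cf6:f5c8:c946:2c30%eno1";

        // Throws java.util.IllegalFormatConversionException: e != java.lang.Integer,
        // because "%e" is parsed as a conversion specifier:
        // String.format(address + "-%d", 1);

        // Escaping the percent sign avoids the crash:
        System.out.println(String.format(address.replace("%", "%%") + "-%d", 1));
      }
    }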

    The full stack trace looks like this:

    java.util.IllegalFormatConversionException: e != java.lang.Integer
    	at java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4302)
    	at java.util.Formatter$FormatSpecifier.printFloat(Formatter.java:2806)
    	at java.util.Formatter$FormatSpecifier.print(Formatter.java:2753)
    	at java.util.Formatter.format(Formatter.java:2520)
    	at java.util.Formatter.format(Formatter.java:2455)
    	at java.lang.String.format(String.java:2981)
    	at com.google.common.util.concurrent.ThreadFactoryBuilder.format(ThreadFactoryBuilder.java:181)
    	at com.google.common.util.concurrent.ThreadFactoryBuilder.setNameFormat(ThreadFactoryBuilder.java:70)
    	at io.scalecube.cluster.membership.MembershipProtocol.<init>(MembershipProtocol.java:110)
    	at io.scalecube.cluster.ClusterImpl.lambda$join0$25(ClusterImpl.java:81)
    	at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
    	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
    	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
    	at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
    	at io.scalecube.transport.TransportImpl.lambda$bind0$0(TransportImpl.java:104)
    	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:514)
    	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:488)
    	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:427)
    	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:111)
    	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
    	at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:897)
    	at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:570)
    	at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1258)
    	at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:511)
    	at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:496)
    	at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:980)
    	at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:250)
    	at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:363)
    	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:418)
    	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:440)
    	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
    	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
    	at java.lang.Thread.run(Thread.java:745)
    

    Using scalecube-cluster-1.0.3

    🐞 Bug 🐞 
    opened by guss77 10
  • Cluster and transport code clean up. Part 3


    • Introduce new Message and migrate all components to it
    • Clean up IGossipProtocol interface and make it work with new Message
    • Clean up IClusterMembership interface
    • Clean up Cluster code and ICluster interface (without JavaDocs yet)
    opened by antonkharenko 10
  • Starting many nodes at once throw exception address already in use


    Code:

        for (int i = 0 ; i < 50; i++) {
          exec.execute(new Runnable() {
            @Override
            public void run() {
              ICluster member = Cluster.joinAwait(seed.address());
            }
          });
        }
    

    Output:

    I 0921-1835:43,635 i.s.t.Transport Connected from 10.150.4.21:4816 to 10.150.4.21:4807: [id: 0xa5f9fce7, L:/10.150.4.21:51671 - R:/10.150.4.21:4807] [sc-io-52-2]
    D 0921-1835:43,667 i.s.c.m.MembershipProtocol Received membership gossip: {m: [email protected]:4803, s: SUSPECT, inc: 0} [sc-membership-10.150.4.21:4801]
    Exception in thread "pool-1-thread-8" java.lang.RuntimeException: java.net.BindException: Address already in use: bind
        at com.google.common.base.Throwables.propagate(Throwables.java:160)
        at io.scalecube.cluster.Cluster.joinAwait(Cluster.java:94)
        at io.scalecube.leaderelection.LeaderElectionIT$1.run(LeaderElectionIT.java:97)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
    Caused by: java.net.BindException: Address already in use: bind
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Unknown Source)
        at sun.nio.ch.Net.bind(Unknown Source)
        at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source)
        at sun.nio.ch.ServerSocketAdaptor.bind(Unknown Source)
        at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:501)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1218)
        at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:505)
        at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:490)
        at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:965)
        at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:210)
        at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:353)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:402)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        ... 1 more
    
    🐞 Bug 🐞 
    opened by ronenhamias 9
  • Define formatting settings


    Need to agree on formatter settings and do the next steps:

    • define default formatting settings
    • add configuration of formatter to repository,
    • add checkstyle plugin to build process
    opened by myroslavlisniak 9
  • Starting member with dynamically allocated port


    Motivation: streamline testing and development and deployments with zero configuration.

    The enhancement is to enable creating an instance without specifying a specific port.

    Current:

        ICluster clusterA = Cluster.newInstance(3000).join();
        ICluster clusterB = Cluster.newInstance(3001).join();

    After the enhancement it will be possible to run several cluster instances in the same VM:

        ICluster clusterA = Cluster.newInstance().join(); // starts on the default port
        ICluster clusterB = Cluster.newInstance().join(); // starts on the default port + 1

    feature ⚙️ 
    opened by ronenhamias 9
  • Use an unpooled buffer for setup payload


    This fixes data corruption issues when reusing event loops for multiple service calls.

    A demo of the bug can be found here: https://gist.github.com/Sm0keySa1m0n/f3396a8aa3d5e093d24571ffd5da9c69

    You'll notice the byte read from the setup payload is not correct in the second service call. This was causing ClosedChannelExceptions on the client because the server was silently closing the connection after a StreamCorruptionException was thrown when trying to decode the connection setup payload. Therefore this PR will probably fix https://github.com/scalecube/scalecube-services/issues/743 and https://github.com/scalecube/scalecube-services/issues/697.

    opened by Sm0keySa1m0n 7
  • Change behavior handling of consumer/supplier in builder


    Right now, I lose the previous settings when I invoke the options setter several times:

          Microservices.builder()
                .discovery(options -> options.seeds(seed.discovery().address()))
                .discovery(options -> options.port(123))
                .discovery(options -> options.memberHost("member"))
    

    I'd like to propose making it in the Microservices like this:

    public Builder discovery(Consumer<ServiceDiscoveryConfig.Builder> discoveryOptions) {
         if (this.discoveryOptions != null) {
          this.discoveryOptions = this.discoveryOptions.andThen(discoveryOptions);
        } else {
          this.discoveryOptions = discoveryOptions;
        }
    ...
    
    discussion 💬 Behavior change ⚠️ 
    opened by segabriel 7
  • Checks @Principal annotation in not-auth method


    I have interface:

    @Service("destination-selector")
    public interface DestinationSelector {
    
        @ServiceMethod("select")
        Mono<Destination> select(SelectContext selectContext, List<Destination> source);
    
    }
    

    My app throws this exception: "The second parameter can be only @Principal (optional)". I do not want to use security in this app.

    Is this behavior correct in scalecube-services?

    🐞 Bug 🐞 discussion 💬 
    opened by eutkin 6
  • Transport port binding issues causing "TransportBrokenException: Detected duplicate.."


    Transport binds and listens on all network interfaces (0.0.0.0) for a specific port. On the other hand, it defines its address using IpAddressResolver.resolveIpAddress(), which determines the local IP by looping over NetworkInterface.networkInterfaces() and taking the first IPv4 interface (excluding loopback). This is not good, because networkInterfaces() doesn't guarantee the order of the returned collection. This IP address is then used by cluster membership. But seed members can use another network interface to connect to the same endpoint, so from the Cluster's point of view these are viewed as separate addresses even though they lead to the same endpoint, which results in "TransportBrokenException: Detected duplicate...".
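
    For reference, a JDK-only sketch of the resolution strategy described above (an illustration, not the actual IpAddressResolver code). The iteration order of NetworkInterface.getNetworkInterfaces() is not guaranteed, which is what makes the resolved address unstable:

        import java.net.Inet4Address;
        import java.net.InetAddress;
        import java.net.NetworkInterface;
        import java.net.SocketException;
        import java.util.Collections;

        final class FirstIpv4Resolver {

          // Iterate interfaces and take the first non-loopback IPv4 address found.
          static InetAddress resolve() throws SocketException {
            for (NetworkInterface ni : Collections.list(NetworkInterface.getNetworkInterfaces())) {
              for (InetAddress addr : Collections.list(ni.getInetAddresses())) {
                if (addr instanceof Inet4Address && !addr.isLoopbackAddress()) {
                  return addr;
                }
              }
            }
            throw new SocketException("No non-loopback IPv4 address found");
          }
        }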

    Case 1. Several network interfaces (NI)

    On member there're 2 network interfaces. One private, one external. Our code in TransportFactory listens both interfaces.

    From logs:

    I 0727-1325:09,312 c.p.o.c.t.i.ExceptionCaughtChannelHandler Broken transport: NettyTransport{status=CONNECTED, channel=[id: 0x7ba35e97, /10.144.178.10:35789 => /10.144.176.10:4830]}, cause: TBE: Detected duplicate NettyTransport{status=READY, channel=[id: 0xbe66e44b, /172.31.178.10:57668 => /172.31.176.10:4830]} for key=tcp://172.31.178.10:4830 in accepted_map [oapi-transport-io-0@tcp://172.31.176.10:4830]
    

    Caused by the following flow:

    • Established conn 10.144.178.10:35789 => /10.144.176.10:4830 – but local address is tcp://172.31.176.10:4830 (as you can see from thread name), it means Transport listens both *.144 and *.31 addresses.
    • Then duplicate detected because there's already - NettyTransport{status=READY, channel=[id: 0xbe66e44b, /172.31.178.10:57668 => /172.31.176.10:4830]} under key tcp://172.31.178.10:4830 (i.e. connection was established earlier by interface with IP *.31, but connect attempt is made using interface with IP *.144).

    Case 2. DNS names not supported

    Env: 2 nodes with 1 NI each.

    Conditions: both nodes restarted.

    What happens: a transport from node 1 to node 2 (port 59915) is established; after that a new transport (port 59916) attempts to establish, which results in TBE (duplicates are not allowed: there's already 59915).

    From logs:

    I 0728-1135:13,753 i.n.h.l.LoggingHandler [id: 0xe634f142, /172.29.49.109:59915 => app10.scalecube.io/172.29.49.177:4830] ACTIVE [oapi-transport-io-0@tcp://172.29.49.109:4830]
    ...
    I 0728-1135:14,009 i.n.h.l.LoggingHandler [id: 0xebc94f97, /172.29.49.109:59916 => /172.29.49.177:4830] ACTIVE [oapi-transport-io-0@tcp://172.29.49.109:4830]
    

    Note: putting IP addresses instead of DNS names into the seed member addresses on cluster initialization fixes this issue altogether.

    🐞 Bug 🐞 
    opened by antonkharenko 6
  • Why Service interface and qualifier has one-to-one relationship


    The interface is converted to a qualifier and back. Qualifier + node id is a unique service key in the cluster. In my opinion, this contradicts the nature of interfaces in that we cannot have multiple implementations in one node. Why can we have multiple implementations of an interface within a cluster, but not within a node?

    For example, for versioning, we have to create a new interface every time.

    Maybe we should consider not only the interface metadata, but also some metadata of the implementation?

    discussion 💬 
    opened by eutkin 5
  • Add support of polymorphic types


    Subj.

    Consider writing the _type header conditionally to the response message. For example, it is possible to deduce that the class in the return type (or in the @Response annotation) is abstract or an interface and store this polymorphic indicator (e.g. a boolean morph flag) in the MethodInfo.

    Improvement ↗️ discussion 💬 
    opened by artem-v 5
  • Website expired?


    The services.html and other pages linked from the readme file are reported as expired by the hoster. Fortunately, I got the idea to search for a copy in the WaybackMachine/Internet Archive (there is one from January 2020).

    Even if you don't write new docs, maybe the old documentation is still helpful and could be moved to the GitHub wiki, or so.

    opened by vinjana 2
  • Is there any up to date documentation anywhere?


    It looks like a lot of the examples and quickstarts were for older versions? For example .discovery(o -> o.seeds(seed.discovery().address())) is explained nowhere, and I'm not sure what it does. It's also been changed to require a String (id) and a transport factory.

    I'm also having a lot of trouble figuring out what dependencies are needed, because the ones listed aren't even artifacts, and the ones specified that do exist are not all of what I need?

    opened by cyberpwnn 1
  • Incorrect REQUEST_CHANNEL method


    https://gist.github.com/eutkin/bdeed67520078388de89b11b00ce7812

    Test to reproduce the bug.

    A method with a void requestChannel(Flux<String>) signature is valid, but works incorrectly.

    🐞 Bug 🐞 
    opened by eutkin 0
  • Support qualifier format with version


    Support a new qualifier format: version/namespace/action. The version must be in the format v{d+}: the character v plus one or more digits.

    • The old qualifier format must also work. The version for qualifiers currently in use (/namespace/action or namespace/action) must be 0.

    • Add new method to Qualifier: int getQualifierVersion()

    version/namespace/action
    ^^^^^^^
    v1 or v22 or v356  => parse  v{d+}
    
    namespace/action   or  /namespace/action
    <= 0
    

    If this is the old qualifier format (/namespace/action or namespace/action) then 0 must be returned.

    • Add a new annotation @Version with a default int value of 1. Validate during scanning: don't allow negative or zero. The annotation is optional, and must be allowed only on the class level.

    • @ServiceMethod considerations: the qualifier on the service method annotation must not include a version. Only the current behavior is allowed: overriding namespace and action.


    Example:

    @Version(1)
    @Service("auth")
    interface AuthService {
    
      @ServiceMethod
      Response createAccessKey(Request)
    }
    

    Expected qualifier: v1/auth/createAccessKey


    @Version(0)
    @Service("auth")
    interface AuthService {
    
      @ServiceMethod
      Response createAccessKey(Request)
    }
    

    Expected: an exception saying it's not allowed to set 0 as a version. NOTE: 0 is a reserved value; it is used to denote the absence of the @Version annotation.


    @Version(1)
    @Service("auth")
    interface AuthService {
    
      @ServiceMethod
      Response createAccessKey(Request)
    
      @ServiceMethod("v2/auth/create_access_key")
      Response createAccessKey2(Request)
    }
    
    @Service("auth")
    interface AuthService {
    
      @ServiceMethod("v1/auth/create_access_key")
      Response createAccessKey(Request)
    }
    

    Expected: an exception in both examples saying it's not allowed to set versions on a service method.
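
    A minimal sketch of the version-parsing rule proposed above (a hypothetical JDK-only helper, not part of the library):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    final class QualifierVersions {

      // "v1/auth/createAccessKey" -> 1; "auth/createAccessKey" or "/auth/createAccessKey" -> 0.
      private static final Pattern VERSIONED = Pattern.compile("^v(\\d+)/.+");

      static int getQualifierVersion(String qualifier) {
        Matcher m = VERSIONED.matcher(qualifier);
        return m.matches() ? Integer.parseInt(m.group(1)) : 0;
      }
    }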

    opened by artem-v 0
  • Unstable test ServiceCallRemoteTest.test_many_stream_block_first()


    Subj.

    I 0516-1440:08,512 i.s.s.BaseTest ***** Test started  : ServiceCallRemoteTest.test_many_stream_block_first() ***** [main]
    W 0516-1440:08,586 r.c.p.Operators Cannot further apply discard hook while discarding and clearing a queue [parallel-7]
    java.lang.IndexOutOfBoundsException: 0
    	at java.util.stream.SpinedBuffer.get(SpinedBuffer.java:170) ~[?:?]
    	at java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:302) ~[?:?]
    	at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681) ~[?:?]
    	at reactor.core.publisher.FluxIterable$IterableSubscription.isEmpty(FluxIterable.java:401) ~[reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.publisher.FluxIterable$IterableSubscription.poll(FluxIterable.java:412) ~[reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.publisher.Operators.onDiscardQueueWithClear(Operators.java:455) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.publisher.FluxPublishOn$PublishOnConditionalSubscriber.runSync(FluxPublishOn.java:915) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.publisher.FluxPublishOn$PublishOnConditionalSubscriber.run(FluxPublishOn.java:1062) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    	at java.lang.Thread.run(Thread.java:834) [?:?]
    W 0516-1440:08,589 r.c.p.Operators Cannot further apply discard hook while discarding and clearing a queue [parallel-8]
    java.lang.IndexOutOfBoundsException: 1
    	at java.util.stream.SpinedBuffer.get(SpinedBuffer.java:170) ~[?:?]
    	at java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:302) ~[?:?]
    	at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681) ~[?:?]
    	at reactor.core.publisher.FluxIterable$IterableSubscription.isEmpty(FluxIterable.java:401) ~[reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.publisher.FluxIterable$IterableSubscription.poll(FluxIterable.java:412) ~[reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.publisher.Operators.onDiscardQueueWithClear(Operators.java:455) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.publisher.FluxPublishOn$PublishOnConditionalSubscriber.runSync(FluxPublishOn.java:915) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.publisher.FluxPublishOn$PublishOnConditionalSubscriber.run(FluxPublishOn.java:1062) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37) [reactor-core-3.3.5.RELEASE.jar:3.3.5.RELEASE]
    	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    	at java.lang.Thread.run(Thread.java:834) [?:?]
    
    🐞 Bug 🐞 
    opened by artem-v 0