Zuul is a gateway service that provides dynamic routing, monitoring, resiliency, security, and more.

Overview

Zuul is an L7 application gateway that provides capabilities for dynamic routing, monitoring, resiliency, security, and more. Please see the wiki for usage information, HOWTOs, and more: https://github.com/Netflix/zuul/wiki

Here are some links to help you learn more about the Zuul project. Feel free to open a PR to add any other info, presentations, etc.


Articles from Netflix:

Zuul 1: http://techblog.netflix.com/2013/06/announcing-zuul-edge-service-in-cloud.html

Zuul 2:

https://medium.com/netflix-techblog/open-sourcing-zuul-2-82ea476cb2b3

http://techblog.netflix.com/2016/09/zuul-2-netflix-journey-to-asynchronous.html


Netflix presentations about Zuul:

Strange Loop 2017 - Zuul 2: https://youtu.be/2oXqbLhMS_A

AWS re:Invent 2018 - Scaling push messaging for millions of Netflix devices: https://youtu.be/IdR6N9B-S1E


Slides from Netflix presentations about Zuul:

http://www.slideshare.net/MikeyCohen1/zuul-netflix-springone-platform

http://www.slideshare.net/MikeyCohen1/rethinking-cloud-proxies-54923218

https://github.com/strangeloop/StrangeLoop2017/blob/master/slides/ArthurGonigberg-ZuulsJourneyToNonBlocking.pdf

https://www.slideshare.net/SusheelAroskar/scaling-push-messaging-for-millions-of-netflix-devices


Projects Using Zuul:

https://cloud.spring.io/

https://jhipster.github.io/


Info and examples from various projects:

https://spring.io/guides/gs/routing-and-filtering/

http://www.baeldung.com/spring-rest-with-zuul-proxy

https://blog.heroku.com/using_netflix_zuul_to_proxy_your_microservices

http://blog.ippon.tech/jhipster-3-0-introducing-microservices/


Other blog posts about Zuul:

https://engineering.riotgames.com/news/riot-games-api-fulfilling-zuuls-destiny

https://engineering.riotgames.com/news/riot-games-api-deep-dive

http://instea.sk/2015/04/netflix-zuul-vs-nginx-performance/


Comments
  • How do I enable ZUUL_DEBUG, REQUEST_DEBUG logging.

    According to this (https://github.com/Netflix/zuul/wiki/How-To-Use), ZUUL_DEBUG and REQUEST_DEBUG should be enabled by default. However, going through our logs we only get INFO-level logging. What do I need to change?
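
    If these loggers are driven by the debug flags on RequestContext (as the wiki's filter examples suggest), a minimal sketch of a pre filter that forces them on for every request might look like this (Zuul 1.x API; the filter name and order are hypothetical):

        import com.netflix.zuul.ZuulFilter;
        import com.netflix.zuul.context.RequestContext;

        // Hypothetical pre filter: turns on routing/request debug collection for
        // every request. In real use, gate shouldFilter() on a property or header.
        public class EnableDebugFilter extends ZuulFilter {
            @Override public String filterType() { return "pre"; }
            @Override public int filterOrder() { return 1; }
            @Override public boolean shouldFilter() { return true; }

            @Override
            public Object run() {
                RequestContext ctx = RequestContext.getCurrentContext();
                ctx.setDebugRouting(true); // collect routing debug entries
                ctx.setDebugRequest(true); // collect request debug entries
                return null;
            }
        }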

    opened by micheal-swiggs 18
  • com.netflix.zuul.netty.connectionpool.OriginConnectException: Origin server inactive

    2019-03-12 16:49:36,125 WARN com.netflix.zuul.filters.endpoint.ProxyEndpoint [Salamander-ClientToZuulWorker-4] FAILURE_ORIGIN_RESET_CONNECTION, origin = wallet, origin channel info = Channel: [id: 0x7884ebae, L:0.0.0.0/0.0.0.0:53476 ! R:/172.25.199.169:8072], active=false, open=false, registered=true, writable=false, id=7884ebae, Passport: CurrentPassport {start_ms=1552380576107, [+0=SERVER_CH_ACTIVE, +6134400=IN_REQ_HEADERS_RECEIVED, +6160100=FILTERS_INBOUND_START, +6179100=FILTERS_INBOUND_END, +6215300=ORIGIN_CONN_ACQUIRE_START, +6216800=ORIGIN_CONN_ACQUIRE_END, +6224300=OUT_REQ_HEADERS_SENDING, +9490300=OUT_REQ_HEADERS_SENT, +9504700=IN_REQ_LAST_CONTENT_RECEIVED, +9514400=OUT_REQ_LAST_CONTENT_SENDING, +9518300=OUT_REQ_LAST_CONTENT_SENT, +18110400=ORIGIN_CH_INACTIVE, +18126800=NOW]} com.netflix.zuul.netty.connectionpool.OriginConnectException: Origin server inactive

    How did a problem like this happen, and what can I do about it?

    question 2.x 
    opened by Harlan6 15
  • Launching zuul-netflix webapp fails with java.lang.ArrayIndexOutOfBoundsException

    When running zuul-netflix-webapp with ../gradlew jettyRun on my system (Ubuntu 14.04.1 with Oracle JDK 1.7.0_72-b14), the launch fails with an ArrayIndexOutOfBoundsException, reproduced in full at the end of this issue report. This is apparently due to a dependency on turbine-core that does not pin a particular version; when this started failing on my system on Friday, I traced it to turbine-core-2.0.0-DP.1.jar.

    Unlike the other dependencies specified in the netflix-webapp subdirectory (which specify a particular library version), the Turbine dependency is declared as

    compile 'com.netflix.turbine:turbine-core:[0.4,)'
    

    Changing this to a fixed, lower version of turbine-core fixes this problem. (I tried 0.5 in my tests, which worked fine.)
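
    For reference, pinning the dependency (e.g. to the 0.5 version mentioned above) would look like this in the same build file:

        compile 'com.netflix.turbine:turbine-core:0.5'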

    > Building > :zuul-netflix-webapp:jettyRun > Starting Jetty > Resolving dependencies ':zuul-netflix-webapp:zuul-netflix-webapp:jettyRun
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/home/user/.gradle/wrapper/dists/gradle-1.1-bin/13d7lnhcrghv2i5e54el41jpgr/gradle-1.1/lib/logback-classic-1.0.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/user/.gradle/caches/artifacts-14/filestore/org.slf4j/slf4j-log4j12/1.7.2/jar/7539c264413b9b1ff9841cd00058c974b7cd1ec9/slf4j-log4j12-1.7.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/home/user/.gradle/caches/artifacts-14/filestore/org.slf4j/slf4j-simple/1.7.7/jar/8095d0b9f7e0a9cd79a663c740e0f8fb31d0e2c8/slf4j-simple-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
     WARN 16:23:53,061 No URLs will be polled as dynamic configuration sources.
     INFO 16:23:53,174 To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
     INFO 16:23:53,195 DynamicPropertyFactory is initialized with configuration sources: com.netflix.config.ConcurrentCompositeConfiguration@48c5241f
     INFO 16:23:53,207 Loading application properties with app id: zuul and environment: null
     INFO 16:23:53,230 Loaded properties file file:/home/user/foreign-repos/zuul/zuul-netflix-webapp/build/resources/main/zuul.properties
     INFO 16:23:53,233 Creating a new governator classpath scanner with base packages: [com.netflix]
    Could not instantiate listener com.netflix.zuul.StartServer
    java.lang.ArrayIndexOutOfBoundsException: 23884
        at org.objectweb.asm.ClassReader.<init>(Unknown Source)
        at org.objectweb.asm.ClassReader.<init>(Unknown Source)
        at org.objectweb.asm.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.finder.AnnotationFinder.readClassDef(AnnotationFinder.java:957)
        at org.apache.xbean.finder.AnnotationFinder.<init>(AnnotationFinder.java:120)
        at com.netflix.governator.lifecycle.ClasspathScanner.doScanning(ClasspathScanner.java:141)
        at com.netflix.governator.lifecycle.ClasspathScanner.<init>(ClasspathScanner.java:82)
        at com.netflix.governator.lifecycle.ClasspathScanner.<init>(ClasspathScanner.java:61)
        at com.netflix.governator.guice.LifecycleInjector.createStandardClasspathScanner(LifecycleInjector.java:286)
        at com.netflix.karyon.server.ServerBootstrap.initialize(ServerBootstrap.java:114)
        at com.netflix.karyon.server.KaryonServer.<init>(KaryonServer.java:166)
        at com.netflix.karyon.server.KaryonServer.<init>(KaryonServer.java:138)
        at com.netflix.zuul.StartServer.<init>(StartServer.java:84)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at java.lang.Class.newInstance(Class.java:374)
        at org.mortbay.jetty.webapp.WebXmlConfiguration.newListenerInstance(WebXmlConfiguration.java:650)
        at org.mortbay.jetty.webapp.WebXmlConfiguration.initListener(WebXmlConfiguration.java:631)
        at org.mortbay.jetty.webapp.WebXmlConfiguration.initWebXmlElement(WebXmlConfiguration.java:368)
        at org.mortbay.jetty.plus.webapp.AbstractConfiguration.initWebXmlElement(AbstractConfiguration.java:190)
        at org.mortbay.jetty.webapp.WebXmlConfiguration.initialize(WebXmlConfiguration.java:289)
        at org.mortbay.jetty.plus.webapp.AbstractConfiguration.initialize(AbstractConfiguration.java:133)
        at org.mortbay.jetty.webapp.WebXmlConfiguration.configure(WebXmlConfiguration.java:222)
        at org.mortbay.jetty.plus.webapp.AbstractConfiguration.configure(AbstractConfiguration.java:113)
        at org.mortbay.jetty.webapp.WebXmlConfiguration.configureWebApp(WebXmlConfiguration.java:180)
        at org.mortbay.jetty.plus.webapp.AbstractConfiguration.configureWebApp(AbstractConfiguration.java:96)
        at org.mortbay.jetty.plus.webapp.Configuration.configureWebApp(Configuration.java:149)
        at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1269)
        at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:517)
        at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:489)
        at org.gradle.api.plugins.jetty.internal.JettyPluginWebAppContext.doStart(JettyPluginWebAppContext.java:112)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
        at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
        at org.mortbay.jetty.Server.doStart(Server.java:224)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
        at org.gradle.api.plugins.jetty.internal.Jetty6PluginServer.start(Jetty6PluginServer.java:111)
        at org.gradle.api.plugins.jetty.AbstractJettyRunTask.startJettyInternal(AbstractJettyRunTask.java:247)
        at org.gradle.api.plugins.jetty.AbstractJettyRunTask.startJetty(AbstractJettyRunTask.java:198)
        at org.gradle.api.plugins.jetty.AbstractJettyRunTask.start(AbstractJettyRunTask.java:169)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1047)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:877)
        at org.gradle.api.internal.BeanDynamicObject$MetaClassAdapter.invokeMethod(BeanDynamicObject.java:196)
        at org.gradle.api.internal.BeanDynamicObject.invokeMethod(BeanDynamicObject.java:102)
        at org.gradle.api.internal.CompositeDynamicObject.invokeMethod(CompositeDynamicObject.java:99)
        at org.gradle.api.plugins.jetty.JettyRun_Decorated.invokeMethod(Unknown Source)
        at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source)
        at org.gradle.util.ReflectionUtil.invoke(ReflectionUtil.groovy:23)
        at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$4.execute(AnnotationProcessingTaskFactory.java:150)
        at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$4.execute(AnnotationProcessingTaskFactory.java:145)
        at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:472)
        at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:461)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:60)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
        at org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:34)
        at org.gradle.api.internal.changedetection.CacheLockHandlingTaskExecuter$1.run(CacheLockHandlingTaskExecuter.java:34)
        at org.gradle.cache.internal.DefaultCacheAccess$2.create(DefaultCacheAccess.java:200)
        at org.gradle.cache.internal.DefaultCacheAccess.longRunningOperation(DefaultCacheAccess.java:172)
        at org.gradle.cache.internal.DefaultCacheAccess.longRunningOperation(DefaultCacheAccess.java:198)
        at org.gradle.cache.internal.DefaultPersistentDirectoryStore.longRunningOperation(DefaultPersistentDirectoryStore.java:137)
        at org.gradle.api.internal.changedetection.DefaultTaskArtifactStateCacheAccess.longRunningOperation(DefaultTaskArtifactStateCacheAccess.java:83)
        at org.gradle.api.internal.changedetection.CacheLockHandlingTaskExecuter.execute(CacheLockHandlingTaskExecuter.java:32)
        at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:55)
        at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:57)
        at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:41)
        at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:51)
        at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:52)
        at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:42)
        at org.gradle.api.internal.AbstractTask.executeWithoutThrowingTaskFailure(AbstractTask.java:247)
        at org.gradle.execution.DefaultTaskGraphExecuter.executeTask(DefaultTaskGraphExecuter.java:192)
        at org.gradle.execution.DefaultTaskGraphExecuter.doExecute(DefaultTaskGraphExecuter.java:177)
        at org.gradle.execution.DefaultTaskGraphExecuter.execute(DefaultTaskGraphExecuter.java:83)
        at org.gradle.execution.SelectedTaskExecutionAction.execute(SelectedTaskExecutionAction.java:36)
        at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61)
        at org.gradle.execution.DefaultBuildExecuter.access$200(DefaultBuildExecuter.java:23)
        at org.gradle.execution.DefaultBuildExecuter$2.proceed(DefaultBuildExecuter.java:67)
        at org.gradle.api.internal.changedetection.TaskCacheLockHandlingBuildExecuter$1.run(TaskCacheLockHandlingBuildExecuter.java:31)
        at org.gradle.cache.internal.DefaultCacheAccess$1.create(DefaultCacheAccess.java:111)
        at org.gradle.cache.internal.DefaultCacheAccess.useCache(DefaultCacheAccess.java:126)
        at org.gradle.cache.internal.DefaultCacheAccess.useCache(DefaultCacheAccess.java:109)
        at org.gradle.cache.internal.DefaultPersistentDirectoryStore.useCache(DefaultPersistentDirectoryStore.java:129)
        at org.gradle.api.internal.changedetection.DefaultTaskArtifactStateCacheAccess.useCache(DefaultTaskArtifactStateCacheAccess.java:79)
        at org.gradle.api.internal.changedetection.TaskCacheLockHandlingBuildExecuter.execute(TaskCacheLockHandlingBuildExecuter.java:29)
        at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61)
        at org.gradle.execution.DefaultBuildExecuter.access$200(DefaultBuildExecuter.java:23)
        at org.gradle.execution.DefaultBuildExecuter$2.proceed(DefaultBuildExecuter.java:67)
        at org.gradle.execution.DryRunBuildExecutionAction.execute(DryRunBuildExecutionAction.java:32)
        at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61)
        at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:54)
        at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:155)
        at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:110)
        at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:78)
        at org.gradle.launcher.cli.ExecuteBuildAction.run(ExecuteBuildAction.java:38)
        at org.gradle.launcher.exec.InProcessGradleLauncherActionExecuter.execute(InProcessGradleLauncherActionExecuter.java:39)
        at org.gradle.launcher.exec.InProcessGradleLauncherActionExecuter.execute(InProcessGradleLauncherActionExecuter.java:25)
        at org.gradle.launcher.cli.RunBuildAction.run(RunBuildAction.java:50)
        at org.gradle.launcher.cli.ActionAdapter.execute(ActionAdapter.java:30)
        at org.gradle.launcher.cli.ActionAdapter.execute(ActionAdapter.java:22)
        at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:200)
        at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:173)
        at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:169)
        at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:138)
        at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:33)
        at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:22)
        at org.gradle.launcher.Main.doAction(Main.java:48)
        at org.gradle.launcher.bootstrap.EntryPoint.run(EntryPoint.java:45)
        at org.gradle.launcher.Main.main(Main.java:39)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:50)
        at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:32)
        at org.gradle.launcher.GradleMain.main(GradleMain.java:26)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.gradle.wrapper.BootstrapMainStarter.start(BootstrapMainStarter.java:33)
        at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:130)
        at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:48)
    > Building > :zuul-netflix-webapp:jettyRun > Running at http://localhost:8080//
    
    opened by pataprogramming 14
  • Zuul2: Routing without using Eureka service discovery

    Hello,

    We want to use Zuul without service discovery, for instance by defining properties such as zuul.routes.books.url = http://localhost:8090/books and routing accordingly with a custom filter. But I can't figure out how to disable Eureka and perform routing via custom filters. zuul-core has dependencies on ribbon-eureka and eureka-client, and disabling Eureka seems impossible to me.

    Is this possible? Or are you considering releasing a zuul-without-eureka?
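
    For reference, the static-server-list style of configuration quoted from the Zuul 2 sample in a later issue on this page avoids the registry fetch; a rough sketch, assuming an origin named "api" on localhost:8090:

        eureka.shouldFetchRegistry=false
        api.ribbon.listOfServers=localhost:8090
        api.ribbon.client.NIWSServerListClassName=com.netflix.loadbalancer.ConfigurationBasedServerList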

    2.x 
    opened by mmdemirbas 13
  • Streaming file download through Zuul

    Basically I'm trying to stream a file download through Zuul. When calling the service directly, the download streams fine; however, when calling through Zuul it seems to download the file to Zuul first and then stream it from there. For a large file this results in a "waiting connection" loop in the browser while Zuul fetches the file and then streams it out. Is there a way to have the stream be passed through Zuul? I've tried custom response-writing filters that specify Transfer-Encoding: chunked, and I've set response headers, etc. Is there anything I'm missing?

    opened by ali2992 12
  • Best Practices: Error Responses from Inbound filters

    Are there best practices around how to respond from inbound filters? Let's say, for example, there is an inbound filter that checks whether a particular route exists and, if not, needs to respond with a 404. The only way to do this right now seems to be to create a new endpoint filter that returns a 404 and then set that as the endpoint on the request context. Isn't there a better way to do this, like just throwing an exception with a 404 code or something?

    question 2.x 
    opened by sandy-adi 11
  • One question & one problem

    Dear artgon & zuul team,

    Question :

    I saw this code snippet in SampleService.java:

        public Observable makeSlowRequest() {
            return Observable.just("test").delay(500, TimeUnit.MILLISECONDS);
        }

    It runs on:

        filter.applyAsync(inMesg)
            .observeOn(Schedulers.from(getChannelHandlerContext(inMesg).executor()))
            .doOnUnsubscribe(resumer::decrementConcurrency)
            .subscribe(resumer);

    On my local machine I only have 8 worker threads, and RxJava will use one of them and block it. If many connections come in, e.g. 10 HTTP requests sent within 500 milliseconds, 2 of them must wait until two of the first 8 finish. Should I use another executor to handle the slow request? For example:

        public Observable makeSlowRequest() {
            return Observable.just("test").map(sync_service_code).subscribeOn(Schedulers.io());
        }
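
    For genuinely blocking work, the subscribeOn(Schedulers.io()) approach suggested above would look roughly like the following sketch (RxJava 1.x; the class and the blocking call are hypothetical stand-ins, not Zuul code):

        import rx.Observable;
        import rx.schedulers.Schedulers;

        public class SlowService {
            // The blocking work is subscribed on the io() scheduler, so it does not
            // tie up an event-loop worker thread; downstream operators can still
            // observeOn(...) the channel's executor as in the snippet above.
            public Observable<String> makeSlowRequest() {
                return Observable
                        .fromCallable(() -> {
                            Thread.sleep(500); // stand-in for a blocking backend call
                            return "test";
                        })
                        .subscribeOn(Schedulers.io());
            }
        }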

    Problem :

    I configured application.properties like below:

        // Load balancing backends with Eureka
        //eureka.shouldUseDns=true
        //eureka.eurekaServer.context=discovery/v2
        //eureka.eurekaServer.domainName=discovery${environment}.netflix.net
        //eureka.eurekaServer.gzipContent=true
        //eureka.serviceUrl.default=http://${region}.${eureka.eurekaServer.domainName}:7001/${eureka.eurekaServer.context}
        //api.ribbon.NIWSServerListClassName=com.netflix.niws.loadbalancer.DiscoveryEnabledNIWSServerList
        //api.ribbon.DeploymentContextBasedVipAddresses=api-test.netflix.net:7001

        ////// Load balancing backends without Eureka
        eureka.shouldFetchRegistry=false
        api.ribbon.listOfServers=127.0.0.1:8889
        api.ribbon.client.NIWSServerListClassName=com.netflix.loadbalancer.ConfigurationBasedServerList
        api.ribbon.DeploymentContextBasedVipAddresses=api-test.netflix.net:7001

    Then I start up the Zuul 2.1 server and run this command on my Mac: ab -c 100 -n 100 -k 'http://localhost:7001/api/hello'. After the test finishes, I go to my web browser and open http://localhost:7001/api/hello, and the HTTP request always gets error code 503. Looking at my Zuul 2.1 server logs, I see:

    2018-04-26 13:22:14,711 WARN com.netflix.zuul.netty.connectionpool.PerServerConnectionPool [Salamander-ClientToZuulWorker-5] Unable to create new connection because at MaxConnectionsPerHost! maxConnectionsPerHost=50, connectionsPerHost=50, host=aaaaa.dianrong.com:8889origin=api
    2018-04-26 13:22:14,712 WARN com.netflix.zuul.filters.endpoint.ProxyEndpoint [Salamander-ClientToZuulWorker-5] FAILURE_LOCAL_THROTTLED_ORIGIN_SERVER_MAXCONN, origin = api, origin channel info = com.netflix.zuul.netty.connectionpool.OriginConnectException: maxConnectionsPerHost=50, connectionsPerHost=50
    2018-04-26 13:22:14,712 INFO com.netflix.zuul.sample.filters.outbound.ZuulResponseFilter [Salamander-ClientToZuulWorker-5] Passport: CurrentPassport {start_ms=1524720134710, [+0=SERVER_CH_ACTIVE, +310536=IN_REQ_HEADERS_RECEIVED, +663597=FILTERS_INBOUND_START, +1057463=FILTERS_INBOUND_END, +1624557=ORIGIN_CONN_ACQUIRE_START, +1682176=ORIGIN_CONN_ACQUIRE_FAILED, +1967045=FILTERS_OUTBOUND_START, +2326851=NOW]}
    2018-04-26 13:22:14,712 WARN com.netflix.zuul.netty.server.ClientResponseWriter [Salamander-ClientToZuulWorker-5] Writing response to client channel before have received the LastContent of request! uri=http://localhost:7001/api/hello, method=get, clientip=0:0:0:0:0:0:0:1, Channel: [id: 0x0612891b, L:/0:0:0:0:0:0:0:1:7001 - R:/0:0:0:0:0:0:0:1:63491], active=true, open=true, registered=true, writable=true, id=0612891b, Passport: CurrentPassport {start_ms=1524720134710, [+0=SERVER_CH_ACTIVE, +310536=IN_REQ_HEADERS_RECEIVED, +663597=FILTERS_INBOUND_START, +1057463=FILTERS_INBOUND_END, +1624557=ORIGIN_CONN_ACQUIRE_START, +1682176=ORIGIN_CONN_ACQUIRE_FAILED, +1967045=FILTERS_OUTBOUND_START, +2428057=FILTERS_OUTBOUND_END, +2522462=NOW]}
    2018-04-26 13:22:14,713 INFO ACCESS [Salamander-ClientToZuulWorker-5] 2018-04-26T13:22:14.71 0:0:0:0:0:0:0:1 7001 GET /api/hello 503 3027 - 7b1e09cb-7417-4e74-8a2c-bd4028cb123f "localhost:7001" "-" "-" "-" "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36" "-" "-" "-"

    I debugged into it and found that it always calls tryMakingNewConnection in PerServerConnectionPool.java and never auto-recovers until I restart the server.

    Could you help me find out why I'm encountering this problem?
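
    One thing that may be worth checking: the logged limit of 50 matches Ribbon's default MaxConnectionsPerHost, so if the cap itself is too low for the load, raising it for the origin's client config could help, assuming your Zuul version's connection pool honors that Ribbon key (verify against the connection pool configuration code before relying on this):

        # Assumption: the "api" origin's connection pool reads Ribbon's MaxConnectionsPerHost
        api.ribbon.MaxConnectionsPerHost=200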

    2.x 
    opened by Agileaq 11
  • Namespace confusion is happening with OpenStack's Zuul project

    Hi! I work on the Zuul project started by OpenStack developers back in 2012:

    https://github.com/openstack-infra/zuul/tree/1.1.0

    We've known for a while that there was a name collision when Netflix released its Zuul project. However, we may need to do something to disambiguate, as we have at least one example of confusion around the name:

    https://thenewstack.io/ibm-openstack-engineer-urges-cncf-consider-augmenting-jenkins-zuul/

    We should work together to find a solution to this. Thanks!

    opened by SpamapS 10
  • Intercepting and injecting parameter in the request body

    Hi all, I'm trying to intercept a call directed to Zuul through a route filter, to inject a parameter into the body. I've written the following code:

        public Object run() {
            RequestContext ctx = RequestContext.getCurrentContext();
            HttpServletRequest request = ctx.getRequest();
    
            logger.info(String.format("%s request to %s", request.getMethod(), request.getRequestURL().toString()));
            String uri = request.getRequestURI();
    
            List<String> parameterList = environment.getProperty("zuul.routes." + ctx.get("proxy") + ".bodyParameterInjector.parametersList", List.class);
            Map<String, String[]> extras = new TreeMap<>();
    
            if (parameterList!=null)
                for (String parameterName:parameterList) {
                    List<String> parameterValues = environment.getProperty("zuul.routes." + ctx.get("proxy") + ".bodyParameterInjector.parametersValues." + parameterName, List.class);
                    if (parameterValues!=null)
                        extras.put(parameterName, parameterValues.toArray(new String[]{}));
                }
    
            MyHttpServletRequestWrapper req = new MyHttpServletRequestWrapper(request, extras);
            for (Enumeration<String> e = req.getRequest().getParameterNames(); e.hasMoreElements();)
                logger.info("Parameter: [" + e.nextElement() + "]");
            ctx.setRequest(req);
            return null;
        }
    

    Basic Steps:

    1. intercept the call
    2. read the list of properties to inject into the body from application.yml
    3. create a list of extra parameters to add to the request
    4. use the ServletRequestWrapper mechanism to add parameters to the request (a sketch of such a wrapper follows this list)
    5. log the resulting body parameters (here I see the one I've injected)
    6. replace the request with the modified request
    7. sniffing the call at the endpoint, I do not receive the body parameter that I saw at step 5
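
    A minimal sketch of the wrapper described in step 4, using only the standard servlet API (the class name is mine; the author's MyHttpServletRequestWrapper may differ):

        import java.util.Collections;
        import java.util.Enumeration;
        import java.util.HashMap;
        import java.util.Map;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletRequestWrapper;

        // Hypothetical wrapper: merges extra parameters into the view that the
        // wrapped request exposes to downstream code.
        public class ExtraParamsRequestWrapper extends HttpServletRequestWrapper {
            private final Map<String, String[]> params;

            public ExtraParamsRequestWrapper(HttpServletRequest request, Map<String, String[]> extras) {
                super(request);
                params = new HashMap<>(request.getParameterMap());
                params.putAll(extras);
            }

            @Override public Map<String, String[]> getParameterMap() { return Collections.unmodifiableMap(params); }
            @Override public Enumeration<String> getParameterNames() { return Collections.enumeration(params.keySet()); }
            @Override public String[] getParameterValues(String name) { return params.get(name); }
            @Override public String getParameter(String name) {
                String[] values = params.get(name);
                return (values == null || values.length == 0) ? null : values[0];
            }
        }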

    Now I'm not sure whether ctx.setRequest is the right way to substitute the request and do this dirty job.

    Can anyone help me?

    thanks Paolo

    opened by paolo-rendano 10
  • Add generic type and protected getters to RibbonCommand

    I have a use case in which I want to use RibbonCommand, but I don't want to execute with the load balancer. The default implementation is left unchanged.

    • Changed class signature to accept a generic to indicate the type of the client. Due to type erasure this is a passive change
    • Added protected methods for member fields
    • Added protected method to delegate client execution when overridden
    opened by mattnelson 9
  • zuul-core:fix bug #802,Null pointer Exception in PassportLoggingHandler

    In issue #802, PassportLoggingHandler throws a NullPointerException. When decoding fails, com.netflix.zuul.netty.server.ClientRequestReceiver.channelRead() does not call com.netflix.zuul.stats.status.StatusCategoryUtils.setStatusCategory(ctx, statusCategory), so the context is left without a status category. When com.netflix.zuul.netty.insights.PassportLoggingHandler.logPassport(channel) later writes to the log, it can't get the statusCategory from the context, so a NullPointerException is thrown. Stack trace:

        Error logging passport info after request completed!
        java.lang.NullPointerException
            at com.netflix.zuul.stats.status.StatusCategoryUtils.getStatusCategory(StatusCategoryUtils.java:39)
            at com.netflix.zuul.netty.insights.PassportLoggingHandler.logPassport(PassportLoggingHandler.java:98)
            at com.netflix.zuul.netty.insights.PassportLoggingHandler.userEventTriggered(PassportLoggingHandler.java:73)
            at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
            at io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
            at io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:307)
            at io.netty.channel.ChannelInboundHandlerAdapter.userEventTriggered(ChannelInboundHandlerAdapter.java:108)

    The fix is to call com.netflix.zuul.stats.status.StatusCategoryUtils.setStatusCategory(ctx, statusCategory) to set the statusCategory on the context when decoding fails. Please review this pull request, thanks.

    opened by skyguard1 8
  • use LongAdder for filter concurrent count

    Summary

    • replace AtomicInteger with LongAdder in BaseFilter

    Why use LongAdder?

        This class [LongAdder] is usually preferable to AtomicLong when multiple
        threads update a common sum that is used for purposes such as collecting
        statistics, not for fine-grained synchronization control. Under low update
        contention, the two classes have similar characteristics. But under high
        contention, expected throughput of this class is significantly higher, at
        the expense of higher space consumption.

    Javadoc: https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/concurrent/atomic/LongAdder.html

    LongAdder microbenchmark results

    https://github.com/sullis/microbenchmarks-java/blob/main/README.md
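
    For illustration, a self-contained sketch of the kind of counter this swaps in (not the actual BaseFilter change):

        import java.util.concurrent.atomic.LongAdder;

        // Minimal sketch: increment on filter entry, decrement on exit, and sum()
        // only when the gauge is read. Under heavy contention this is cheaper than
        // a single AtomicInteger that every thread compare-and-swaps on.
        public class ConcurrencyCounter {
            private final LongAdder concurrentCount = new LongAdder();

            public void enter() { concurrentCount.increment(); }
            public void exit() { concurrentCount.decrement(); }
            public long current() { return concurrentCount.sum(); }
        }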

    opened by sullis 1
  • build(deps): bump perfmark-api from 0.25.0 to 0.26.0

    Bumps perfmark-api from 0.25.0 to 0.26.0.

    Release notes

    Sourced from perfmark-api's releases.

    Release 0.26.0

    API Changes

    • PerfMark.setEnabled() now returns if setting the value succeeded. (#181).

    Implementation Improvements

    • Added workarounds for Java 19's virtual threads, which may not be able to use thread-local storage. If this is the case, PerfMark attempts to emulate thread-local trace buffers using a concurrent map.
    • Trace storage now more eagerly removes storage when it finds the thread is gone, and is more GC friendly. PerfMark still attempts to preserve trace data after a thread finishes, but without strongly referring to it.

    Unstable API Changes

    The following changes are to unstable APIs of PerfMark. This section describes APIs for advanced users to try out new functionality before it becomes API stable.

    • Added Methods to Storage for clearing thread local and global storage (#177)
      • Storage.clearLocalStorage() enables individual threads to clear their storage
      • Storage.clearGlobalIndex() marks storage as SoftlyReachable where possible. It can be used to indicate that future calls to Storage.read() should not include data after the point at which the global index was cleared. Both clearLocalStorage and clearGlobalIndex can be used to remove old trace data.
      • LocalMarkHolder was added to enter and exit critical sections of MarkHolder mutation. The only implementation currently pulls the MarkHolder out of thread-local storage for editing. However, this is designed to work with other context-specific storage mechanisms, such as Kotlin's Coroutines.
    Commits
    • ace407c Bump to v0.26.0
    • 7e159b6 Minor quality of life improvements for PerfMark (#182)
    • ef41d3c api: record if PerfMark.setEnabled succeeded (#181)
    • 28cff69 impl: store generation with microsecond precision (#179)
    • bff10bc impl: more eagerly reclaim stale MarkHolders
    • f9609ec all: silence jmh warnings from errorprone
    • 10c5b8e Bump to Gradle 7.5.1
    • 808e971 all: rename Storage test methods
    • 6e5673b impl: Factor Thread Storage into pluggable provider
    • 8e74dfa all: fix warnings
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies java 
    opened by dependabot[bot] 0
  • Does Zuul2 Support Injection of an Additional Proxy before Message Reaches the Origin?

    We are currently using Zuul 2.1.9 as our gateway. I have an HttpInboundSyncFilter that I want to use to inject a proxy call between Zuul and the origin service when certain criteria are met on a request. A cURL example (from Zuul) would look like this: curl --proxy http://my-proxy.com:8080 https://my-service/endpoint.

    Is this something that Zuul2 even supports? I'm not sure if Netty even supports it. I'm trying to find a good example on either but coming up short.

    opened by jhareuk 1
  • 504 FAILURE_ORIGIN_RESET_CONNECTION with high load

    Hi, we are using Zuul 2.1.5 in our service, in which we route different URLs to different downstream services/servers. Now, under high load we are facing 504s with nfstatus FAILURE_ORIGIN_RESET_CONNECTION. Below is the passport log. The issue seems to be different from what is mentioned in https://github.com/Netflix/zuul/issues/560, as I can confirm that fix is already in the version being used. It would be very helpful if someone could guide me on what the issue could be. The downstream server is perfectly fine, as requests to the same server through a different route work perfectly. Only the requests through our app are causing 504s.

    Passport logs: {"endOfBatch":true,"level":"INFO","logger":"org.apache.logging.slf4j.Log4jLogger","loggerName":"xxxx.gateway.zuul.passport.PassportLoggingHandler","msg":"State after complete. ProxyAttempted, current-server-conns = 750, current-http-reqs = 174, status = 504, nfstatus = FAILURE_ORIGIN_RESET_CONNECTION, UUID = 1066002d-5ba3-466a-ab04-33fc852d7498, req = uri=https://xxxxxxxx:443/rd/AvMVuwqjDv8S~QOBGrge~yL7NCEqOC75Mv9v~zj~PP_m.gif, method=get, clientip=x.x.x.x, passport = CurrentPassport {start_ms=1665029811559, [+0=IN_REQ_HEADERS_RECEIVED, +72977=FILTERS_INBOUND_START, +343721=FILTERS_INBOUND_END, +441856=ORIGIN_CONN_ACQUIRE_START, +443791=ORIGIN_CONN_ACQUIRE_END, +469729=OUT_REQ_HEADERS_SENDING, +645923=OUT_REQ_HEADERS_SENT, +656653=IN_REQ_LAST_CONTENT_RECEIVED, +692502=OUT_REQ_LAST_CONTENT_SENDING, +699235=OUT_REQ_LAST_CONTENT_SENT, +106847772=ORIGIN_CH_INACTIVE, +106991906=FILTERS_OUTBOUND_START, +107133904=FILTERS_OUTBOUND_END, +107162399=OUT_RESP_HEADERS_SENDING, +107184070=OUT_RESP_LAST_CONTENT_SENDING, +107277146=OUT_RESP_HEADERS_SENT, +107278579=OUT_RESP_LAST_CONTENT_SENT, +107538181=SERVER_CH_CLOSE, +107540515=SERVER_CH_CLOSE, +107599707=NOW]}","thread":"Salamander-ClientToZuulWorker-20","threadId":75,"threadPriority":5,"timestamp":"2022-10-06T04:16:51.667Z","ts":1665029811667}

    opened by tussinha 0
  • NullPointerException for some requests after 2.1.7 -> 2.1.9 upgrade

    Hello, after we upgraded Zuul from 2.1.7 to 2.1.9, we started getting 502s returned for some requests.

    The main log message is the following:

    logger_name: c.n.z.n.s.ClientResponseWriter
    message: ClientResponseWriter caught exception in client connection pipeline: Channel: [id: 0x18e734fc, L:/10.255.0.12:8080 - R:/10.106.65.221:7742], active=true, open=true, registered=true, writable=true, id=18e734fc, Passport: CurrentPassport {start_ms=1665421851993, [+0=IN_REQ_HEADERS_RECEIVED, +25405=FILTERS_INBOUND_START, +605148=IN_REQ_LAST_CONTENT_RECEIVED, +8022162=FILTERS_INBOUND_END, +8472518=ORIGIN_CONN_ACQUIRE_START, +8474740=ORIGIN_CONN_ACQUIRE_END, +8501191=OUT_REQ_HEADERS_SENDING, +8525891=OUT_REQ_LAST_CONTENT_SENDING, +8581036=OUT_REQ_HEADERS_SENT, +8583789=OUT_REQ_LAST_CONTENT_SENT, +57036026=IN_RESP_HEADERS_RECEIVED, +57049009=FILTERS_OUTBOUND_START, +57203853=FILTERS_OUTBOUND_END, +57383477=NOW]}
    stack_trace: j.l.NullPointerException: null
    	at i.n.h.c.h.HttpUtil.isKeepAlive(HttpUtil.java:75)
    	at c.n.z.n.s.ClientResponseWriter.buildHttpResponse(ClientResponseWriter.java:199)
    	at c.n.z.n.s.ClientResponseWriter.channelRead(ClientResponseWriter.java:139)
    	at i.n.c.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    	at i.n.c.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    	at i.n.c.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    	at c.n.z.n.f.BaseZuulFilterRunner.invokeNextStage(BaseZuulFilterRunner.java:151)
    	at c.n.z.n.f.ZuulFilterChainRunner.runFilters(ZuulFilterChainRunner.java:88)
    	at c.n.z.n.f.ZuulFilterChainRunner.filter(ZuulFilterChainRunner.java:56)
    	at c.n.z.f.e.ProxyEndpoint.filterResponse(ProxyEndpoint.java:327)
    	... 64 frames truncated
    

    It seems to come from this line because nativeReq is null. It's being retrieved from the context by the CommonContextKeys.NETTY_HTTP_REQUEST key, and the only place where it's set seems to be here. I'm not sure what to make of these discoveries, unfortunately.
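
    Purely to illustrate the null case described above (hypothetical code, not the actual ClientResponseWriter or the eventual fix), a defensive version of that lookup could look like:

        import com.netflix.zuul.context.CommonContextKeys;
        import com.netflix.zuul.context.SessionContext;
        import io.netty.handler.codec.http.HttpRequest;
        import io.netty.handler.codec.http.HttpUtil;

        final class KeepAliveGuard {
            // Returns false instead of throwing when the native Netty request was
            // never stored in the SessionContext (assumes the String-keyed context
            // of the 2.1.x line).
            static boolean isKeepAlive(SessionContext context) {
                HttpRequest nativeReq = (HttpRequest) context.get(CommonContextKeys.NETTY_HTTP_REQUEST);
                return nativeReq != null && HttpUtil.isKeepAlive(nativeReq);
            }
        }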

    Also, if it helps to debug, these are other log entries for the same channel id:

    logger_name: c.n.z.n.i.PassportLoggingHandler
    message: Request processing took longer than threshold! toplevelid = 76982df9-c56a-463b-b4fb-f00db9d21c7d, Channel: [id: 0x18e734fc, L:/10.255.0.12:8080 - R:/10.106.65.221:7742], active=true, open=true, registered=true, writable=true, id=18e734fc, Passport: CurrentPassport {start_ms=1665421755721, [+0=IN_REQ_HEADERS_RECEIVED, +42807=FILTERS_INBOUND_START, +6862128=FILTERS_INBOUND_END, +6911049=ORIGIN_CONN_ACQUIRE_START, +6912093=ORIGIN_CONN_ACQUIRE_END, +6935320=OUT_REQ_HEADERS_SENDING, +7014311=OUT_REQ_HEADERS_SENT, +2879404595=IN_REQ_LAST_CONTENT_RECEIVED, +2879487068=OUT_REQ_LAST_CONTENT_SENDING, +2879529603=OUT_REQ_LAST_CONTENT_SENT, +3537669603=IN_RESP_HEADERS_RECEIVED, +3537685802=FILTERS_OUTBOUND_START, +3537838120=FILTERS_OUTBOUND_END, +3537861921=OUT_RESP_HEADERS_SENDING, +3537900333=OUT_RESP_HEADERS_SENT, +3537906991=IN_RESP_LAST_CONTENT_RECEIVED, +3537937795=OUT_RESP_LAST_CONTENT_SENDING, +3537962376=OUT_RESP_LAST_CONTENT_SENT, +3538291470=NOW]}
    

    (there are several entries like this ↑ followed by the original NPE entry)

    logger_name: c.n.z.n.s.ClientResponseWriter
    message: Received complete event while still handling the request. With reason: CLOSE -- Channel: [id: 0x18e734fc, L:/10.255.0.12:8080 - R:/10.106.65.221:7742], active=true, open=true, registered=true, writable=true, id=18e734fc, Passport: CurrentPassport {start_ms=1665421851993, [+0=IN_REQ_HEADERS_RECEIVED, +25405=FILTERS_INBOUND_START, +605148=IN_REQ_LAST_CONTENT_RECEIVED, +8022162=FILTERS_INBOUND_END, +8472518=ORIGIN_CONN_ACQUIRE_START, +8474740=ORIGIN_CONN_ACQUIRE_END, +8501191=OUT_REQ_HEADERS_SENDING, +8525891=OUT_REQ_LAST_CONTENT_SENDING, +8581036=OUT_REQ_HEADERS_SENT, +8583789=OUT_REQ_LAST_CONTENT_SENT, +57036026=IN_RESP_HEADERS_RECEIVED, +57049009=FILTERS_OUTBOUND_START, +57203853=FILTERS_OUTBOUND_END, +57568295=SERVER_CH_CLOSE, +57582248=IN_REQ_CANCELLED, +57733681=ORIGIN_CH_CLOSE, +57774299=ORIGIN_CH_CLOSE, +58059284=NOW]}
    
    logger_name: c.n.z.n.i.PassportLoggingHandler
    message: Incorrect final state! toplevelid = 6426fec6-e938-4bc4-9dd5-d1b959101cbf, Channel: [id: 0x18e734fc, L:/10.255.0.12:8080 ! R:/10.106.65.221:7742], active=false, open=false, registered=true, writable=false, id=18e734fc, Passport: CurrentPassport {start_ms=1665421851993, [+0=IN_REQ_HEADERS_RECEIVED, +25405=FILTERS_INBOUND_START, +605148=IN_REQ_LAST_CONTENT_RECEIVED, +8022162=FILTERS_INBOUND_END, +8472518=ORIGIN_CONN_ACQUIRE_START, +8474740=ORIGIN_CONN_ACQUIRE_END, +8501191=OUT_REQ_HEADERS_SENDING, +8525891=OUT_REQ_LAST_CONTENT_SENDING, +8581036=OUT_REQ_HEADERS_SENT, +8583789=OUT_REQ_LAST_CONTENT_SENT, +57036026=IN_RESP_HEADERS_RECEIVED, +57049009=FILTERS_OUTBOUND_START, +57203853=FILTERS_OUTBOUND_END, +57568295=SERVER_CH_CLOSE, +57582248=IN_REQ_CANCELLED, +57733681=ORIGIN_CH_CLOSE, +57774299=ORIGIN_CH_CLOSE, +58078446=SERVER_CH_CLOSE, +58082086=SERVER_CH_CLOSE, +58140741=NOW]}
    

    Also, all of the problem requests seem to be GraphQL ones.

    opened by meetyourturik 0
Releases: v2.3.0

Owner: Netflix, Inc. (Netflix Open Source Platform)