High Performance Inter-Thread Messaging Library

Overview

LMAX Disruptor: A High Performance Inter-Thread Messaging Library. Maintainer: LMAX Development Team. Support: open a ticket in the GitHub issue tracker.

Comments
  • Deadlock observed in BlockingWaitStrategy in Log4j

    We have seen this behaviour during high load: logging stopped and the application became unresponsive. Log4j version: 2.2. Disruptor version: 3.3.2. Ring buffer size: 128. The producers (multiple threads) and the single consumer thread (as per the Log4j configuration) started waiting on each other.

    Here is one of the traces from the thread dump:

    Producer:

    "[ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)'" TIMED_WAITING
        sun.misc.Unsafe.park(Native Method)
        java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:349)
        com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
        com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
        com.lmax.disruptor.RingBuffer.publishEvent(RingBuffer.java:444)
        com.lmax.disruptor.dsl.Disruptor.publishEvent(Disruptor.java:256)
        org.apache.logging.log4j.core.async.AsyncLogger.logMessage(AsyncLogger.java:285)
        org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:722)
        org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:693)
        org.apache.logging.log4j.jcl.Log4jLog.debug(Log4jLog.java:81)

    Consumer thread:

    "AsyncLogger-1" waiting for lock java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@5d972983 WAITING
        sun.misc.Unsafe.park(Native Method)
        java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
        com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:55)
        com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:123)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:744)

    Is this a known issue that has already been fixed in a recent build?

    Before raising this with Log4j, I thought I would check here first, since the deadlock is related to the LMAX implementation.

    Could you please take a look? It is critical for our application. Let me know if any other details are required; your quick support is much appreciated.

    Code snippet for BlockingWaitStrategy:

    public final class BlockingWaitStrategy implements WaitStrategy
    {
        private final Lock lock = new ReentrantLock();
        private final Condition processorNotifyCondition = lock.newCondition();
    
        @Override
        public long waitFor(long sequence, Sequence cursorSequence, Sequence dependentSequence, SequenceBarrier barrier)
            throws AlertException, InterruptedException
        {
            long availableSequence;
            if ((availableSequence = cursorSequence.get()) < sequence)
            {
                lock.lock();
                try
                {
                    while ((availableSequence = cursorSequence.get()) < sequence)
                    {
                        barrier.checkAlert();
                        processorNotifyCondition.await();
                    }
                }
                finally
                {
                    lock.unlock();
                }
            }
    
            while ((availableSequence = dependentSequence.get()) < sequence)
            {
                barrier.checkAlert();
            }
    
            return availableSequence;
        }
    
        @Override
        public void signalAllWhenBlocking()
        {
            lock.lock();
            try
            {
                processorNotifyCondition.signalAll();
            }
            finally
            {
                lock.unlock();
            }
        }
    }
    

    Regards, Sakumar

    opened by SampathK 47
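
    For readers following the trace: the producer is parked inside MultiProducerSequencer.next, which is where a producer waits when the 128-slot ring has no free space, while the consumer is parked on the BlockingWaitStrategy condition waiting to be signalled. As background, here is a minimal sketch of the normal claim/publish/wake handoff using a recent 3.x DSL; the event class, buffer size and thread factory below are illustrative, not taken from the report.

    import com.lmax.disruptor.BlockingWaitStrategy;
    import com.lmax.disruptor.EventHandler;
    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.dsl.ProducerType;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    public final class BlockingHandoffSketch
    {
        static final class Event
        {
            long value;
        }

        public static void main(final String[] args)
        {
            // Same shape as the report: small multi-producer ring, one blocking consumer.
            Disruptor<Event> disruptor = new Disruptor<>(
                    Event::new, 128, DaemonThreadFactory.INSTANCE,
                    ProducerType.MULTI, new BlockingWaitStrategy());

            EventHandler<Event> handler =
                    (event, sequence, endOfBatch) -> System.out.println("consumed " + event.value);
            disruptor.handleEventsWith(handler);
            disruptor.start();

            RingBuffer<Event> ringBuffer = disruptor.getRingBuffer();
            for (long i = 0; i < 1_000; i++)
            {
                long seq = ringBuffer.next();   // parks (parkNanos) only while the ring is full
                try
                {
                    ringBuffer.get(seq).value = i;
                }
                finally
                {
                    ringBuffer.publish(seq);    // publish() wakes waiters via signalAllWhenBlocking()
                }
            }
            disruptor.shutdown();               // drain outstanding events, then stop the consumer
        }
    }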
  • Using the Disruptor as a pipeliner

    Hello, I'd like to have a ring-buffer-like structure that does the following job.

    That is, I want to use a ring buffer to merge a bunch of sequenced records (1, 2, 3, ...) into one sequence.

    opened by amosbird 32
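
    The question above is brief, but one common way to fold several independently sequenced sources into a single ordered stream is to let every source publish into one ProducerType.MULTI ring buffer and have a single handler observe the merged sequence. A purely illustrative sketch (all names and sizes are invented):

    import com.lmax.disruptor.BlockingWaitStrategy;
    import com.lmax.disruptor.EventTranslatorOneArg;
    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.dsl.ProducerType;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    public final class MergeSketch
    {
        static final class Record
        {
            int sourceId;
            long sourceSequence;   // the per-source sequence number (1, 2, 3, ...)
        }

        static final EventTranslatorOneArg<Record, long[]> TRANSLATOR =
                (event, sequence, value) ->
                {
                    event.sourceId = (int) value[0];
                    event.sourceSequence = value[1];
                };

        public static void main(final String[] args) throws InterruptedException
        {
            Disruptor<Record> disruptor = new Disruptor<>(
                    Record::new, 1024, DaemonThreadFactory.INSTANCE,
                    ProducerType.MULTI, new BlockingWaitStrategy());

            // A single handler sees the merged stream in ring-buffer (claim) order.
            disruptor.handleEventsWith((record, sequence, endOfBatch) ->
                    System.out.println("ring seq " + sequence
                            + " <- source " + record.sourceId + "/" + record.sourceSequence));
            disruptor.start();

            RingBuffer<Record> ringBuffer = disruptor.getRingBuffer();

            // Two producer threads, each publishing its own sequenced records.
            Thread[] sources = new Thread[2];
            for (int s = 0; s < sources.length; s++)
            {
                final int sourceId = s + 1;
                sources[s] = new Thread(() ->
                {
                    for (long n = 1; n <= 5; n++)
                    {
                        ringBuffer.publishEvent(TRANSLATOR, new long[]{sourceId, n});
                    }
                });
                sources[s].start();
            }
            for (Thread source : sources)
            {
                source.join();
            }
            disruptor.shutdown();   // drain, then stop the consumer
        }
    }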
  • Is it legal to produce events from the background consumer thread using ProducerType.MULTI?

    I'm trying to debug an issue where the queue appears to be full, but my background thread is in a runnable state, spinning at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:190) without making any progress. I'm curious whether there's a memory-safety issue when the consumer thread adds elements to the queue, but I don't see this mentioned in the documentation.

    The issue is odd because the background thread never appears to do a great deal of processing, and the queue grows slowly over time, as though state has been lost.

    Any information or ideas are greatly appreciated, thanks!

    opened by carterkozak 24
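
    On the question above: claiming slots in a ProducerType.MULTI ring buffer is safe from any thread, including a handler thread, as far as the sequencer is concerned. One failure mode worth noting, though, is that RingBuffer.next() blocks while the ring is full, and if the blocked thread is the very handler that is supposed to free slots, nothing ever progresses. A hedged sketch of the non-blocking alternative, tryPublishEvent; the event and handler names are invented:

    import com.lmax.disruptor.EventHandler;
    import com.lmax.disruptor.EventTranslatorOneArg;
    import com.lmax.disruptor.RingBuffer;

    // Illustrative only: a handler that re-publishes follow-up work into the same ring buffer.
    // Using tryPublishEvent instead of publishEvent means a full ring is reported to the caller
    // rather than blocking the very thread that is supposed to drain it.
    public final class RepublishingHandler implements EventHandler<RepublishingHandler.Task>
    {
        public static final class Task
        {
            long payload;
            boolean followUp;
        }

        private static final EventTranslatorOneArg<Task, Long> FOLLOW_UP =
                (event, sequence, payload) ->
                {
                    event.payload = payload;
                    event.followUp = true;
                };

        private final RingBuffer<Task> ringBuffer;

        public RepublishingHandler(final RingBuffer<Task> ringBuffer)
        {
            this.ringBuffer = ringBuffer;
        }

        @Override
        public void onEvent(final Task task, final long sequence, final boolean endOfBatch)
        {
            if (!task.followUp)
            {
                // Non-blocking claim: returns false when no slot is free instead of parking.
                boolean published = ringBuffer.tryPublishEvent(FOLLOW_UP, task.payload);
                if (!published)
                {
                    // Handle back-pressure here (queue it elsewhere, drop, log, ...).
                }
            }
        }
    }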
  • Extract RingBuffer interface refactoring

    It would be great to extract a RingBuffer interface from the com.lmax.disruptor.RingBuffer class to simplify mocking the RingBuffer in unit tests. The RingBuffer class is currently declared final and can't be mocked directly with the Mockito framework. We use an additional "wrapper" class over RingBuffer to work around this at the moment (using PowerMock is not an option for us).

    opened by asandrigailo 24
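
    The workaround mentioned above (a thin wrapper that application code depends on instead of the final RingBuffer class) might look roughly like this; the interface and class names are invented for illustration:

    import com.lmax.disruptor.EventTranslatorOneArg;
    import com.lmax.disruptor.RingBuffer;

    // Hypothetical seam for unit testing: production code depends on EventPublisher,
    // which Mockito can mock, while the real implementation delegates to the final RingBuffer.
    public interface EventPublisher<E, A>
    {
        void publish(A arg);

        long remainingCapacity();
    }

    final class RingBufferEventPublisher<E, A> implements EventPublisher<E, A>
    {
        private final RingBuffer<E> ringBuffer;
        private final EventTranslatorOneArg<E, A> translator;

        RingBufferEventPublisher(final RingBuffer<E> ringBuffer, final EventTranslatorOneArg<E, A> translator)
        {
            this.ringBuffer = ringBuffer;
            this.translator = translator;
        }

        @Override
        public void publish(final A arg)
        {
            ringBuffer.publishEvent(translator, arg);
        }

        @Override
        public long remainingCapacity()
        {
            return ringBuffer.remainingCapacity();
        }
    }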
  • Batch publication on ring buffer (Take2)

    Hi Mike,

    I've added batchSize and batchStartsAt parameters to handle the buffer-size problem and to cope with the scenario where your incoming buffer is larger than you want to submit at any one time, i.e. your incoming buffer has 10 elements and your ring only has 8 slots (a problem I tripped over in one of our test scenarios).

    opened by SamBarker 20
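
    For context on what the two parameters buy you: the batch-publication overloads on RingBuffer accept a (batchStartsAt, batchSize) window into a larger source array, so a caller can, for example, push a 10-element buffer through an 8-slot ring in more than one call. A hedged usage sketch (the event type and translator are invented):

    import com.lmax.disruptor.EventTranslatorOneArg;
    import com.lmax.disruptor.RingBuffer;

    // Illustrative use of the batch-publication overloads discussed above: publish a
    // window (batchStartsAt, batchSize) of a larger source array, so that no single
    // claimed batch exceeds what the ring can hold at once.
    final class WindowedPublish
    {
        static final class ValueEvent
        {
            long value;
        }

        static final EventTranslatorOneArg<ValueEvent, Long> SET_VALUE =
                (event, sequence, value) -> event.value = value;

        // windowSize must not exceed the ring size.
        static void publishInWindows(final RingBuffer<ValueEvent> ringBuffer,
                                     final Long[] incoming, final int windowSize)
        {
            for (int start = 0; start < incoming.length; start += windowSize)
            {
                int size = Math.min(windowSize, incoming.length - start);
                // Claims `size` slots, translates incoming[start .. start + size - 1], then publishes them.
                ringBuffer.publishEvents(SET_VALUE, start, size, incoming);
            }
        }
    }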
  • What is going to happen when disruptor.shutdown() is called?

    If multiple producer threads are blocked because the Disruptor is full while consumption is still in progress, and at that moment I call shutdown(), will the blocked threads stop blocking? Will they return with nothing, or throw an exception?

    opened by wu-sheng 19
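
    On the question above: in the 3.x implementation a producer blocked in RingBuffer.next() is simply waiting for free slots; Disruptor.shutdown() does not interrupt it or make it throw, so if the consumers have already halted, such a producer can wait indefinitely. Producers that need to bail out usually use the non-blocking claim instead. A small illustrative sketch (the event type and method are invented here):

    import com.lmax.disruptor.InsufficientCapacityException;
    import com.lmax.disruptor.RingBuffer;

    // Illustrative producer helper that never parks indefinitely on a full ring:
    // tryNext() throws InsufficientCapacityException instead of waiting, so the
    // producer can check its own shutdown flag and give up cleanly.
    final class BailOutProducer
    {
        static final class Event
        {
            long value;
        }

        static boolean offer(final RingBuffer<Event> ringBuffer, final long value)
        {
            try
            {
                long sequence = ringBuffer.tryNext();       // claim without blocking
                try
                {
                    ringBuffer.get(sequence).value = value;
                }
                finally
                {
                    ringBuffer.publish(sequence);
                }
                return true;
            }
            catch (InsufficientCapacityException e)
            {
                return false;                               // ring full: let the caller retry or stop
            }
        }
    }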
  • 100% CPU when using SleepingWaitStrategy on 32 bit Linux

    Environment

    Guest: Debian 7.11 kernel 3.2.0-4-486. Host: Mac OS X x64. Virtualization: VirtualBox. Oracle JDK 1.8.0_92. Disruptor 3.3.4.

    Description

    See the sample program. It seems that the (in)famous LockSupport.parkNanos(1L) behaves differently on 64-bit and 32-bit kernels. On 32-bit Linux the sleeping wait strategy behaves almost like a busy spin.

    opened by TanyaGaleyev 19
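
    Independently of the Disruptor, the behaviour described above can be checked by timing LockSupport.parkNanos(1L) directly, since that is what SleepingWaitStrategy in 3.3.4 falls back to after its yield retries. A small JDK-only probe:

    import java.util.concurrent.locks.LockSupport;

    // Measures the real cost of LockSupport.parkNanos(1L); average values in the low
    // microseconds or below suggest the sleeping strategy will behave like a busy spin
    // on that kernel.
    public final class ParkNanosProbe
    {
        public static void main(final String[] args)
        {
            final int iterations = 100_000;
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++)
            {
                LockSupport.parkNanos(1L);
            }
            long elapsed = System.nanoTime() - start;
            System.out.printf("parkNanos(1L) average cost: %.1f us%n",
                    elapsed / (double) iterations / 1_000.0);
        }
    }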
  • CPU at 100% when using BlockingWaitStrategy

    Hi,

    We are using the Disruptor to send events to various parts of our application. When our server starts, the CPU sometimes spikes up and stays at 100%, while at other times it sits idle at 3-5% (which is normal). I would normally put this down to something in the application's timing, but I have dug a little deeper and run some monitoring tools such as jvmtop (https://github.com/patric-r/jvmtop), and it looks like my WorkerPool threads (10 threads) are spinning despite using the BlockingWaitStrategy. Could this be related to https://github.com/LMAX-Exchange/disruptor/issues/149?

    Environment: Java: 1.7.0_71 Disruptor: 3.3.4

    Below are jstack traces for my threads (only including the stack trace for one thread in each case for brevity; all threads show the exact same stack trace):

    Spinning thread causing 100% CPU:

    "Outgoing Worker-thread-9" prio=10 tid=0x00007f2d8d65e800 nid=0x64fd runnable [0x00007f2d7a4c2000]
       java.lang.Thread.State: RUNNABLE
        at com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
        at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:144)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

    Idle thread causing 3-5% CPU:

    "Outgoing Worker-thread-10" prio=10 tid=0x00007f441d667800 nid=0x7887 waiting on condition [0x00007f4408866000]
       java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x00000007030168c8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
        at com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
        at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:144)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

    I set up the RingBuffer / WorkerPool using the following code:

    this.executorService = Executors.newFixedThreadPool(numberOfOutgoingWorkers, threadFactory);

    this.dispatchEventHandlerWorkers = new DispatchEventHandler[numberOfOutgoingWorkers];
    for (int i = 0; i < numberOfOutgoingWorkers; i++) {
        this.dispatchEventHandlerWorkers[i] = new DispatchEventHandler();
    }

    this.ringBuffer = RingBuffer.createMultiProducer(DispatchItemHolder.EVENT_FACTORY, dispatchQueueCapacity, new BlockingWaitStrategy());
    SequenceBarrier barrier = this.ringBuffer.newBarrier();

    this.workerPool = new WorkerPool<>(this.ringBuffer, barrier, new Slf4jIgnoreExceptionHandler(), this.dispatchEventHandlerWorkers);
    this.workerPool.start(this.executorService);
    

    The RingBuffer itself is still fully operational: it is still able to both accept events and dispatch them via the WorkerPool. It is just that my WorkerPool threads are spinning. Is there anything within the Disruptor itself that could cause the WorkerPool threads not to honour the WaitStrategy?

    opened by dingwa 17
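
    One wiring detail that is easy to miss when constructing a WorkerPool by hand (the excerpt above does not show it, though it may simply have been omitted): the workers' sequences need to be registered as gating sequences on the ring buffer via addGatingSequences, otherwise producers never wait for the workers. A hedged sketch of the full wiring, with an invented event type and handler:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import com.lmax.disruptor.BlockingWaitStrategy;
    import com.lmax.disruptor.FatalExceptionHandler;
    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.SequenceBarrier;
    import com.lmax.disruptor.WorkHandler;
    import com.lmax.disruptor.WorkerPool;

    // Sketch of manually wiring a WorkerPool; the addGatingSequences call is the step
    // that makes producers respect the workers' progress.
    final class WorkerPoolWiring
    {
        static final class Holder
        {
            Object item;
        }

        @SuppressWarnings("unchecked")
        static RingBuffer<Holder> wire(final int workers, final int capacity)
        {
            RingBuffer<Holder> ringBuffer = RingBuffer.createMultiProducer(
                    Holder::new, capacity, new BlockingWaitStrategy());
            SequenceBarrier barrier = ringBuffer.newBarrier();

            WorkHandler<Holder>[] handlers = new WorkHandler[workers];
            for (int i = 0; i < workers; i++)
            {
                handlers[i] = holder -> { /* dispatch holder.item somewhere */ };
            }

            WorkerPool<Holder> workerPool =
                    new WorkerPool<>(ringBuffer, barrier, new FatalExceptionHandler(), handlers);

            // Without this, producers do not gate on the workers and can overwrite unprocessed slots.
            ringBuffer.addGatingSequences(workerPool.getWorkerSequences());

            ExecutorService executor = Executors.newFixedThreadPool(workers);
            workerPool.start(executor);
            return ringBuffer;
        }
    }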
  • Concurrency safe with multiple producers and consumers

    Describe the bug: When I use multiple producers and multiple consumers, the accumulator does not always reach the value I expect, even though the next method of MultiProducerSequencer is thread-safe. I don't know the cause of the problem; can you help me?

    To Reproduce

    
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;
    import java.util.stream.Stream;
    
    import com.lmax.disruptor.EventFactory;
    import com.lmax.disruptor.ExceptionHandler;
    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.SleepingWaitStrategy;
    import com.lmax.disruptor.TimeoutException;
    import com.lmax.disruptor.WorkHandler;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.dsl.ProducerType;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import utils.NamedProducerThreadFactory;
    
    public class DisruptorConcurrencySafeTests {
    
    	private static final int RING_BUFFER_SIZE = 2 << 16;
    	private static final int DEFAULT_CORE_NUM = 2;
    	private static final int LOOP_COUNT = 1000_0000;
    
    	static class ConcurrencySafeBean {
    
    		private String identifier;
    		private Long value;
    
    		ConcurrencySafeBean() {
    
    		}
    
    		static ConcurrencySafeBean createDefault() {
    			return new ConcurrencySafeBean();
    		}
    
    		public String getIdentifier() {
    			return identifier;
    		}
    
    		public void setIdentifier(String identifier) {
    			this.identifier = identifier;
    		}
    
    		public Long getValue() {
    			return value;
    		}
    
    		public void setValue(Long value) {
    			this.value = value;
    		}
    	}
    
    	static class ConcurrencySafeBeanFactory implements EventFactory<ConcurrencySafeBean> {
    
    		@Override
    		public ConcurrencySafeBean newInstance() {
    			return ConcurrencySafeBean.createDefault();
    		}
    	}
    
    	static class ConcurrencySafeBeanWorkHandler implements WorkHandler<ConcurrencySafeBean> {
    
    		private final AtomicLong accumulator;
    
    		public ConcurrencySafeBeanWorkHandler(AtomicLong accumulator) {
    			this.accumulator = accumulator;
    		}
    
    		@Override
    		public void onEvent(ConcurrencySafeBean event) throws Exception {
    			accumulator.addAndGet(event.getValue());
    		}
    	}
    
    	static class DefaultExceptionHandler implements ExceptionHandler<ConcurrencySafeBean> {
    
    		@Override
    		public void handleEventException(Throwable ex, long sequence, ConcurrencySafeBean event) {
    			System.err.println("[HandleEventException] the exception cause: "
    					+ ex.getMessage() + ", sequence: " + sequence);
    		}
    
    		@Override
    		public void handleOnStartException(Throwable ex) {
    			System.err.println("[HandleOnStartException] the exception cause: " + ex.getMessage());
    		}
    
    		@Override
    		public void handleOnShutdownException(Throwable ex) {
    			System.err.println("[HandleOnShutdownException] the exception cause: " + ex.getMessage());
    		}
    	}
    
    	private Disruptor<ConcurrencySafeBean> safeDisruptor;
    	private Disruptor<ConcurrencySafeBean> unsafeDisruptor;
    	private RingBuffer<ConcurrencySafeBean> safeRingBuffer;
    	private RingBuffer<ConcurrencySafeBean> unsafeRingBuffer;
    
    	private final AtomicLong safeAccumulator = new AtomicLong(0L);
    	private final AtomicLong unsafeAccumulator = new AtomicLong(0L);
    
    	@Before
    	public void initialize() {
    		initSafeDisruptor();
    		initUnsafeDisruptor();
    	}
    
    	@After
    	public void destroyed() {
    		try {
    			safeDisruptor.shutdown(10, TimeUnit.SECONDS);
    			unsafeDisruptor.shutdown(10, TimeUnit.SECONDS);
    		}
    		catch (TimeoutException e) {
    			throw new RuntimeException(e);
    		}
    	}
    
    	@Test
    	public void testSingleProducerAndMultiConsumersCases() {
    		final CountDownLatch shutdownLatch = new CountDownLatch(DEFAULT_CORE_NUM);
    		ThreadGroup threadGroup = new ThreadGroup("Unsafe-Concurrency-ThreadGroup");
    		Thread[] producerThreads = createProducerThreads(
    				threadGroup,
    				"Unsafe-Consumer-Thread-",
    				() -> {
    					for (int i = 0; i < LOOP_COUNT / DEFAULT_CORE_NUM; i++) {
    						long sequence = unsafeRingBuffer.next(1);
    						try {
    							ConcurrencySafeBean csb = unsafeRingBuffer.get(sequence);
    							csb.setIdentifier("Unsafe concurrency bean");
    							csb.setValue(1L);
    						}
    						finally {
    							unsafeRingBuffer.publish(sequence);
    						}
    					}
    					shutdownLatch.countDown();
    				},
    				DEFAULT_CORE_NUM
    		);
    		Stream.of(producerThreads)
    			.forEach(Thread::start);
    
    		try {
    			shutdownLatch.await();
    		}
    		catch (InterruptedException e) {
    			// NOOP
    		}
    
    		System.out.println("[Unsafe Concurrency] accumulator: " + unsafeAccumulator.get());
    
    		threadGroup.interrupt();
    	}
    
    	@Test
    	public void testMultiProducersAndMultiConsumersCases() {
    		final CountDownLatch shutdownLatch = new CountDownLatch(DEFAULT_CORE_NUM);
    		ThreadGroup threadGroup = new ThreadGroup("Safe-Concurrency-ThreadGroup");
    		Thread[] producerThreads = createProducerThreads(
    				threadGroup,
    				"Safe-Consumer-Thread-",
    				() -> {
    					for (int i = 0; i < LOOP_COUNT / DEFAULT_CORE_NUM; i++) {
    						long sequence = safeRingBuffer.next(1);
    						try {
    							ConcurrencySafeBean csb = safeRingBuffer.get(sequence);
    							csb.setIdentifier("Safe concurrency bean");
    							csb.setValue(1L);
    						}
    						finally {
    							safeRingBuffer.publish(sequence);
    						}
    					}
    					shutdownLatch.countDown();
    				},
    				DEFAULT_CORE_NUM
    		);
    		Stream.of(producerThreads)
    				.forEach(Thread::start);
    
    		try {
    			shutdownLatch.await();
    		}
    		catch (InterruptedException e) {
    			// NOOP
    		}
    
    		System.out.println("[Safe Concurrency] accumulator: " + safeAccumulator.get());
    
    		threadGroup.interrupt();
    	}
    
    	private void initSafeDisruptor() {
    		this.safeDisruptor = new Disruptor<>(
    				new ConcurrencySafeBeanFactory(),
    				RING_BUFFER_SIZE,
    				new NamedProducerThreadFactory("safe-disruptor-processor-", ""),
    				ProducerType.MULTI,
    				new SleepingWaitStrategy()
    		);
    
    		WorkHandler<ConcurrencySafeBean>[] workHandlers = createWorkHandler(DEFAULT_CORE_NUM, safeAccumulator);
    		this.safeDisruptor.handleEventsWithWorkerPool(workHandlers);
    		this.safeDisruptor.setDefaultExceptionHandler(new DefaultExceptionHandler());
    		this.safeDisruptor.start();
    
    		this.safeRingBuffer = safeDisruptor.getRingBuffer();
    	}
    
    	private void initUnsafeDisruptor() {
    		this.unsafeDisruptor = new Disruptor<>(
    				new ConcurrencySafeBeanFactory(),
    				RING_BUFFER_SIZE,
    				new NamedProducerThreadFactory("safe-disruptor-processor-", ""),
    				ProducerType.SINGLE,
    				new SleepingWaitStrategy()
    		);
    
    		WorkHandler<ConcurrencySafeBean>[] workHandlers = createWorkHandler(DEFAULT_CORE_NUM, unsafeAccumulator);
    		this.unsafeDisruptor.handleEventsWithWorkerPool(workHandlers);
    		this.unsafeDisruptor.setDefaultExceptionHandler(new DefaultExceptionHandler());
    		this.unsafeDisruptor.start();
    
    		this.unsafeRingBuffer = unsafeDisruptor.getRingBuffer();
    	}
    
    	protected WorkHandler<ConcurrencySafeBean>[] createWorkHandler(
    			final int coreNum, final AtomicLong atomicAccumulator) {
    		WorkHandler<ConcurrencySafeBean>[] workHandlers = new ConcurrencySafeBeanWorkHandler[coreNum];
    		for (int accumulator = 0; accumulator < coreNum; accumulator++) {
    			workHandlers[accumulator] = new ConcurrencySafeBeanWorkHandler(atomicAccumulator);
    		}
    		return workHandlers;
    	}
    
    	protected Thread[] createProducerThreads(
    			final ThreadGroup group,
    			final String threadName,
    			final Runnable runnable,
    			final int coreNum
    	) {
    		int threadNumber = 0;
    		Thread[] threads = new Thread[coreNum];
    		for (int i = 0; i < coreNum; i++) {
    			threads[i] = new Thread(
    					group,
    					runnable,
    					threadName + threadNumber
    			);
    			threadNumber ++;
    		}
    		return threads;
    	}
    
    }
    
    

    Expected behavior: In the case of multiple producers and multiple consumers, safeAccumulator should end up at 10,000,000, but it does not always.

    Desktop (please complete the following information):

    • OS: Windows
    • Version: 3.4.4
    • JVM Version: Oracle JDK 1.8

    Additional context: I originally found the problem as repeated sequences in single-producer / multi-consumer mode, and it was still present after switching to multi-producer / multi-consumer mode. The sample code is a verification of the problem we are seeing in production. It has not been tested on Linux.

    opened by Veryfirefly 15
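
    One thing worth double-checking in the test above (not something confirmed in this thread): shutdownLatch is counted down as soon as the producers have finished publishing, so the accumulator can be printed while events are still queued in the ring ahead of the work handlers. Reading it only after the Disruptor has drained removes that source of undercounting. A self-contained sketch of that pattern, using the 3.4.x worker-pool API with an invented event type and smaller counts:

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.SleepingWaitStrategy;
    import com.lmax.disruptor.TimeoutException;
    import com.lmax.disruptor.WorkHandler;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.dsl.ProducerType;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    // Sketch of "drain before asserting": shutdown(timeout) returns only after the work
    // handlers have processed everything published, so the accumulator is read at a stable point.
    public final class DrainBeforeAssert
    {
        static final class ValueEvent
        {
            long value;
        }

        public static void main(final String[] args) throws TimeoutException, InterruptedException
        {
            final AtomicLong accumulator = new AtomicLong();
            Disruptor<ValueEvent> disruptor = new Disruptor<>(
                    ValueEvent::new, 1 << 16, DaemonThreadFactory.INSTANCE,
                    ProducerType.MULTI, new SleepingWaitStrategy());

            WorkHandler<ValueEvent> adder = event -> accumulator.addAndGet(event.value);
            disruptor.handleEventsWithWorkerPool(adder, adder);
            disruptor.start();

            RingBuffer<ValueEvent> ringBuffer = disruptor.getRingBuffer();
            final int total = 1_000_000;
            Thread[] producers = new Thread[2];
            for (int p = 0; p < producers.length; p++)
            {
                producers[p] = new Thread(() ->
                {
                    for (int i = 0; i < total / 2; i++)
                    {
                        long seq = ringBuffer.next();
                        try { ringBuffer.get(seq).value = 1L; }
                        finally { ringBuffer.publish(seq); }
                    }
                });
                producers[p].start();
            }
            for (Thread producer : producers)
            {
                producer.join();                       // all events published ...
            }
            disruptor.shutdown(10, TimeUnit.SECONDS);  // ... and now all events processed
            System.out.println(accumulator.get() + " / " + total);
        }
    }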
  • SleepingWaitStrategy causing 100% CPU with one consumer

    I am using OpenCensus, a metrics and tracing library that uses the Disruptor internally. When my application is idle I see 100% CPU, which I have traced to the single Disruptor consumer thread.

    I am experiencing this on both my Ubuntu box and in the cloud.

    Ubuntu box:
    Ubuntu 17.10, Linux version 4.13.0-17-generic (buildd@lcy01-amd64-011) (gcc version 7.2.0 (Ubuntu 7.2.0-8ubuntu3)) #20-Ubuntu SMP Mon Nov 6 10:04:08 UTC 2017
    java version "1.8.0_161"
    Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
    Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)

    Cloud os: https://cloud.google.com/container-optimized-os/docs/

    I have created a minimal project that reproduces the issue: https://github.com/matthewrj/opencensus-bug/tree/master

    Forcing the Opencensus library to use BlockingWaitStrategy by class path patching makes the CPU usage go to 0%.

    opened by matthewrj 12
  • Nasty bug (probably JVM bug)

    Hello!

    We recently hit a strange bug, somehow related to the LMAX Disruptor. We managed to create a reproducible snapshot: https://github.com/sergey-ushakov/lmax-disruptor-bug

    An NPE happens in a place where it should not be able to happen. The bug disappears in the following cases:

    • the JVM is started with a debugger
    • certain code changes (see comments in the code)
    • running the JVM with -Xint
    • replacing the publish lambda with an anonymous class
    • running the test on JDK 1.9 EA

    The bug appears at a magic number of processed operations, around ~110,600.

    Increasing the Disruptor's ring buffer size lowers the probability of reproduction. With a ring buffer of 4 or 8, the bug reproduces every time.

    The bug is probably related to the JIT.

    opened by sergey-ushakov 12
  • `Util.log2` gets stuck in endless loop for negative values

    Describe the bug: The method com.lmax.disruptor.util.Util.log2 gets stuck in an endless loop for negative values. The reason is that no argument validation is performed and the signed right-shift operator >> is used, which prevents negative values from ever becoming 0.

    I mainly reported this because Util.log2 seems to be part of the public API.

    Unrelated side notes:

    • For some reason the local variable has type long even though the parameter has type int, not sure if that is intentional: https://github.com/LMAX-Exchange/disruptor/blob/c8dcf814f74afa0936b197489c0e9cabce036f94/src/main/java/com/lmax/disruptor/util/Util.java#L99
    • The Util class also has a probably undesired default constructor; it might be worth adding an explicit private constructor.

    To Reproduce

    Util.log2(-1);
    

    Expected behavior: Either some default value should be returned or an exception should be thrown.

    Desktop: does not matter.

    bug 
    opened by Marcono1234 0
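
    To make the reported failure mode concrete: with the arithmetic shift >> the sign bit is copied back in on every step, so a negative operand never reaches zero, whereas the logical shift >>> does terminate. A tiny JDK-only illustration:

    // -1 >> 1 stays -1 forever (sign-extending shift), which is why a log2 loop written
    // with >> never terminates for negative input; >>> shifts in zeros and reaches 0.
    public final class ShiftDemo
    {
        public static void main(final String[] args)
        {
            System.out.println(-1 >> 1);    // -1         : arithmetic shift keeps the sign bit
            System.out.println(-1 >>> 1);   // 2147483647 : logical shift clears it

            int i = -8;
            int steps = 0;
            while ((i >>>= 1) != 0)         // terminates: 2147483644, ..., 1, 0
            {
                steps++;
            }
            System.out.println("steps with >>>: " + steps);
        }
    }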
  • Rewindable event handler separation

    This should give us type-safe rewindable and non-rewindable EventHandler implementations.

    Non-rewindable implementations will not be able to throw RewindableException but will be able to implement EventHandler::setSequenceCallback, and the opposite holds for rewindable implementations.

    This will fix #437

    bug 
    opened by Palmr 3
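
    A rough sketch of the kind of split described above; the interface names and shapes here are illustrative guesses, not the library's final API:

    import com.lmax.disruptor.RewindableException;   // assuming this is where RewindableException lives
    import com.lmax.disruptor.Sequence;

    // Hypothetical shape of the separation only; names and method placement are guesses.
    interface PlainEventHandler<T>
    {
        void onEvent(T event, long sequence, boolean endOfBatch) throws Exception;

        // Only the non-rewindable flavour may advance the sequence of its own accord.
        default void setSequenceCallback(Sequence sequenceCallback)
        {
        }
    }

    interface RewindableEventHandler<T>
    {
        // May ask the BatchEventProcessor to replay the current batch; no sequence callback here.
        void onEvent(T event, long sequence, boolean endOfBatch) throws RewindableException, Exception;
    }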
  • `EventHandler::setSequenceCallback` will not play well with `RewindableException` throwing event handlers

    Since #364 we have had the concept of EventHandler::onEvent being able to throw a RewindableException, which the BatchEventProcessor handles by resetting the sequence to the start of the batch before trying to process the events again.

    With the existing concept of EventHandler::setSequenceCallback, an EventHandler can get a reference to its sequence and move it on of its own accord. Common examples of wanting to do this are limiting the batch size (see com.lmax.disruptor.examples.EarlyReleaseHandler) or updating the sequence after flushing data to IO so downstream consumers can be unblocked.

    If an EventHandler could throw RewindableException after it had already updated the sequence via EventHandler::setSequenceCallback, then things would not go well.

    Expected behaviour: We should make using EventHandler::setSequenceCallback and throwing RewindableException mutually exclusive. The BatchEventProcessor should not catch and handle RewindableException if the EventHandler implements EventHandler::setSequenceCallback.

    bug 
    opened by Palmr 0
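
    For readers unfamiliar with the sequence-callback pattern this issue refers to, here is a rough illustration in the spirit of com.lmax.disruptor.examples.EarlyReleaseHandler. The class, event type and flush details are invented, and it assumes the 4.x EventHandler where setSequenceCallback lives directly on the interface (in 3.x it is on SequenceReportingEventHandler). The handler advances its own sequence after flushing so downstream consumers are released mid-batch; that is exactly the progress a later RewindableException could no longer safely unwind.

    import com.lmax.disruptor.EventHandler;
    import com.lmax.disruptor.Sequence;

    // Illustration only: flushes every FLUSH_INTERVAL events and then advances its own
    // sequence via the callback so downstream consumers are unblocked mid-batch.
    public final class FlushingHandler implements EventHandler<FlushingHandler.LogEvent>
    {
        public static final class LogEvent
        {
            String line;
        }

        private static final int FLUSH_INTERVAL = 64;

        private Sequence sequenceCallback;
        private int sinceLastFlush;

        @Override
        public void setSequenceCallback(final Sequence sequenceCallback)
        {
            this.sequenceCallback = sequenceCallback;
        }

        @Override
        public void onEvent(final LogEvent event, final long sequence, final boolean endOfBatch)
        {
            buffer(event);
            if (++sinceLastFlush == FLUSH_INTERVAL || endOfBatch)
            {
                flush();
                sequenceCallback.set(sequence);   // downstream may now pass this sequence
                sinceLastFlush = 0;
            }
        }

        private void buffer(final LogEvent event)
        {
            // append to an in-memory buffer (elided)
        }

        private void flush()
        {
            // write the buffered lines out (elided)
        }
    }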
  • Questions about usage scenarios

    Hello, while investigating the framework I have a question: should time-consuming operations be kept out of the framework itself, i.e. should EventHandler processing logic avoid doing things like database interactions and other long-running work? Is the advertised very high concurrency only about distributing data through the data structure itself? In general, what is the most appropriate scenario for using this framework?

    opened by rockit-ba 0
Releases: 4.0.0.RC1