A high performance caching library for Java

Overview

Caffeine is a high performance, near optimal caching library. For more details, see our user's guide and browse the API docs for the latest release.

Cache

Caffeine provides an in-memory cache using a Google Guava-inspired API. The improvements draw on our experience designing Guava's cache and ConcurrentLinkedHashMap.

LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .maximumSize(10_000)
    .expireAfterWrite(5, TimeUnit.MINUTES)
    .refreshAfterWrite(1, TimeUnit.MINUTES)
    .build(key -> createExpensiveGraph(key));
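
For example, a lookup returns the cached graph, loading it on a miss (a minimal usage sketch, assuming an existing Key instance named key):

// Returns the cached value, computing it via createExpensiveGraph(key) on a miss
Graph graph = graphs.get(key);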

Features at a Glance

Caffeine provides flexible construction to create a cache with a combination of the following features:

In addition, Caffeine offers the following extensions:

Use Caffeine in a community provided integration:

Powering infrastructure near you:

  • Dropwizard: Ops-friendly, high-performance, RESTful APIs
  • Cassandra: Manage massive amounts of data, fast
  • Accumulo: A sorted, distributed key/value store
  • HBase: A distributed, scalable, big data store
  • Apache Solr: Blazingly fast enterprise search
  • Infinispan: Distributed in-memory data grid
  • Redisson: Ultra-fast in-memory data grid
  • OpenWhisk: Serverless cloud platform
  • Corfu: A cluster consistency platform
  • Grails: Groovy-based web framework
  • Finagle: Extensible RPC system
  • Neo4j: Graphs for Everyone
  • Druid: Real-time analytics

In the News

Download

Download from Maven Central or depend via Gradle:

implementation 'com.github.ben-manes.caffeine:caffeine:3.0.0'

// Optional extensions
implementation 'com.github.ben-manes.caffeine:guava:3.0.0'
implementation 'com.github.ben-manes.caffeine:jcache:3.0.0'

See the release notes for details of the changes.

Snapshots of the development version are available in Sonatype's snapshots repository.

Comments
  • Deadlock in NonReentrantLock

    Under heavy JVM load, we are experiencing deadlocks in Caffeine version 2.5.6.

    Here is the thread dump of the two deadlocked threads:

    "ajp-0.0.0.0-8109-59 (P6V7aMaZbbXkfaYzYzXxID0tvHylFQJumanAAABmA)":
            at sun.misc.Unsafe.park(Native Method)
            - parking to wait for  <0x000000032a600bb8> (a com.github.benmanes.caffeine.cache.NonReentrantLock$Sync)
            at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
            at com.github.benmanes.caffeine.cache.NonReentrantLock$Sync.lock(NonReentrantLock.java:315)
            at com.github.benmanes.caffeine.cache.NonReentrantLock.lock(NonReentrantLock.java:78)
            at com.github.benmanes.caffeine.cache.BoundedLocalCache.performCleanUp(BoundedLocalCache.java:1096)
            at com.github.benmanes.caffeine.cache.BoundedLocalCache.afterWrite(BoundedLocalCache.java:1017)
            at com.github.benmanes.caffeine.cache.BoundedLocalCache.put(BoundedLocalCache.java:1655)
            at com.github.benmanes.caffeine.cache.BoundedLocalCache.put(BoundedLocalCache.java:1602)
            at com.github.benmanes.caffeine.cache.LocalManualCache.put(LocalManualCache.java:64)
    

    and

    "ajp-0.0.0.0-8109-78 (-jIkyghhBePEsLll9i5dnGr65Dx8PfahGe2gAABxE)":
            at sun.misc.Unsafe.park(Native Method)
            - parking to wait for  <0x000000032a600bb8> (a com.github.benmanes.caffeine.cache.NonReentrantLock$Sync)
            at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
            at com.github.benmanes.caffeine.cache.NonReentrantLock$Sync.lock(NonReentrantLock.java:315)
            at com.github.benmanes.caffeine.cache.NonReentrantLock.lock(NonReentrantLock.java:78)
            at com.github.benmanes.caffeine.cache.BoundedLocalCache.performCleanUp(BoundedLocalCache.java:1096)
            at com.github.benmanes.caffeine.cache.BoundedLocalCache.afterWrite(BoundedLocalCache.java:1017)
            at com.github.benmanes.caffeine.cache.BoundedLocalCache.put(BoundedLocalCache.java:1655)
            at com.github.benmanes.caffeine.cache.BoundedLocalCache.put(BoundedLocalCache.java:1602)
            at com.github.benmanes.caffeine.cache.LocalManualCache.put(LocalManualCache.java:64)
    

    Looking at the code, I can't imagine a scenario in which this could occur, but it does occur under heavy load on our JVMs.

    Any ideas what we could do? For now, as a workaround, I've added a synchronized block around the cache.put() calls to make sure only one thread adds new values to the cache at a time.
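
    A minimal sketch of that workaround (the key and value types are hypothetical; this simply serializes all writers at the cost of write throughput):

    private final Object putLock = new Object();

    void putGuarded(Cache<String, Session> cache, String key, Session value) {
        // Coarse-grained guard: only one thread enters the cache's write path at a time
        synchronized (putLock) {
            cache.put(key, value);
        }
    }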

    opened by agoston 104
  • LIRS Eviction Algorithm

    I noticed that concurrentlinkedhashmap originally planned to add LIRS eviction in v2.x, according to the Google Code page [1].

    Is this still planned for Caffeine, the successor to concurrentlinkedhashmap?

    I am also quite happy to see you have custom weighting for entries :+1:

    [1] https://code.google.com/p/concurrentlinkedhashmap/

    opened by wburns 67
  • Very high false positive rate observed for BloomFilter implementation.

    I observed a very high false positive rate with the current implementation of BloomFilter, at times as high as 100%. My test code and results are given below. I also found what can be changed to fix it, although I am not completely sure why the fix works. I thought maybe the bitmask method's output has some bias, but upon further inspection it seems all right. I hope I am using the APIs correctly and don't have some other bug in my test code. It would be great if someone could review and let me know. Thanks! P.S.: I understand that the BloomFilter code might be internal to Caffeine, but I just want to highlight my observation.

    Line: https://github.com/ben-manes/caffeine/blob/master/simulator/src/main/java/com/github/benmanes/caffeine/cache/simulator/admission/bloom/BloomFilter.java#L166

    Current code:

    static long bitmask(int hash) {
        return 1L << ((hash >>> 8) & INDEX_MASK);
      }
    

    | Number of Insertions | False positives (%) | True positives |
    | --- | --- | --- |
    | 1024 | 27 (2.636719%) | 1024 |
    | 4096 | 640 (15.625000%) | 4096 |
    | 16384 | 15213 (92.852783%) | 16384 |
    | 65536 | 65536 (100.000000%) | 65536 |
    | 262144 | 262144 (100.000000%) | 262144 |
    | 1048576 | 1048576 (100.000000%) | 1048576 |

    New implementation:

    static long bitmask(int hash) {
        return 1L << ((hash >>> 24) & INDEX_MASK);
      }
    

    | Number of Insertions | False positives (%) | True positives |
    | --- | --- | --- |
    | 1024 | 15 (1.464844%) | 1024 |
    | 4096 | 96 (2.343750%) | 4096 |
    | 16384 | 391 (2.386475%) | 16384 |
    | 65536 | 1598 (2.438354%) | 65536 |
    | 262144 | 6326 (2.413177%) | 262144 |
    | 1048576 | 25600 (2.441406%) | 1048576 |

    Test method:

        public void bloomFilterTest() {
            System.out.println("Number of Insertions\tFalse positives(%)\tTrue positives");
            for (int capacity = 2 << 10; capacity < 2 << 22; capacity = capacity << 2) {
                long[] input = new Random().longs(capacity).distinct().toArray();
                BloomFilter bf = new BloomFilter(input.length / 2, new Random().nextInt());
                int truePositives = 0;
                int falsePositives = 0;
                int i = 0;
                // Add only first half of input array to bloom filter
                for (; i < (input.length / 2); i++) {
                    bf.put(input[i]);
                }
                // First half should be part of the bloom filter
                for (int k = 0; k < i; k++) {
                    truePositives += bf.mightContain(input[k]) ? 1 : 0;
                }
                // Second half shouldn't be part of the bloom filter
                for (; i < input.length; i++) {
                    falsePositives += bf.mightContain(input[i]) ? 1 : 0;
                }
                System.out.format("%d\t\t%d(%f%%)\t\t%d\n",
                    input.length / 2, falsePositives,
                    ((float) falsePositives / (input.length / 2)) * 100, truePositives);
            }
        }
    
    opened by ashish0x90 57
  • putIfAbsent() regression?

    We recently upgraded from 2.7.0 to 2.8.8 and noticed that our putIfAbsent code-paths are contending more heavily than before (at least that's our observation):

       java.lang.Thread.State: BLOCKED (on object monitor)
    	at com.github.benmanes.caffeine.cache.BoundedLocalCache.put(BoundedLocalCache.java:2030)
    	- waiting to lock <0x00007f00ba087828> (a com.github.benmanes.caffeine.cache.PSWMS)
    	at com.github.benmanes.caffeine.cache.BoundedLocalCache.putIfAbsent(BoundedLocalCache.java:1965)
    

    We are aware of the commit https://github.com/ben-manes/caffeine/commit/614fe6053e343481c15c55f1d696517d73baae0d, which should have improved putIfAbsent, but we are somehow seeing the opposite. This is a bounded (1M) synchronous cache with a 1-day expiration.
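
    For context, that configuration looks roughly like the following (a sketch; the key and value types are hypothetical):

    Cache<String, Record> cache = Caffeine.newBuilder()
        .maximumSize(1_000_000)                  // bounded at 1M entries
        .expireAfterWrite(1, TimeUnit.DAYS)      // 1-day expiration
        .build();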

    opened by panghy 43
  • Question - too much time spent on scheduleDrainBuffers for reading operations

    I'm using the Modeshape JCR implementation, and it internally uses a Caffeine cache configured in the following way: Caffeine.newBuilder().maximumSize(workspaceCacheSize).executor(Runnable::run).build(). I noticed that each read operation tries to perform some cleanup work, represented by the scheduleDrainBuffers() method, which runs on the current thread. The problem I see is that Modeshape reads the cache very frequently, and each scheduleDrainBuffers call takes, in one example, about 128 ms, which seems high. Why is this operation required when reading the cache? I attached some images showing the time spent executing the cleanup tasks.
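
    For reference, Caffeine's default executor is ForkJoinPool.commonPool(), so the two configurations look roughly like this (a sketch; workspaceCacheSize is Modeshape's setting and the value type is hypothetical):

    // As configured by Modeshape: maintenance runs inline on the reading/writing thread
    Cache<String, Object> sameThread = Caffeine.newBuilder()
        .maximumSize(workspaceCacheSize)
        .executor(Runnable::run)
        .build();

    // Default: maintenance is handed off to ForkJoinPool.commonPool() asynchronously
    Cache<String, Object> asyncMaintenance = Caffeine.newBuilder()
        .maximumSize(workspaceCacheSize)
        .build();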

    I appreciate your help.

    Thanks

    opened by alejo-17 43
  • cache performing worse than LRU

    Hi @ben-manes,

    I have created a trace from a production system where I see Caffeine underperforming a simple LRU strategy (with a cache size of 100k items, Caffeine hit rate is 72.35% vs 75.91% with LRU): scarab-recs.trace.gz

    The format is a series of longs (can be decoded with DataInputStream#readLong)
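
    A minimal sketch of a reader for that trace, assuming the attachment is a gzip-compressed stream of big-endian longs as DataInputStream expects (java.io and java.util.zip imports omitted):

    void readTrace(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new BufferedInputStream(
                new GZIPInputStream(new FileInputStream(path))))) {
            while (true) {
                long key = in.readLong();   // one cache key per record
                // replay the key against the cache or simulator under test
            }
        } catch (EOFException endOfTrace) {
            // readLong throws EOFException once the stream is exhausted
        }
    }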

    Any thoughts if this could be improved upon?

    Thanks, Viktor

    opened by phraktle 42
  • Remove explicit nullable annotations if they depend on the generic type argument

    Hi Ben,

    I was pleasantly surprised by the CheckerFramework annotations in the caffeine code; we also switched a while back, and it makes life easier. :+1:

    A pattern I saw, and wanted to check your opinion on, is for example Cache::get:

      @Nullable
      V get(@NonNull K key, @NonNull Function<? super K, ? extends V> mappingFunction);
    

    Now, reading the documentation: if the mapping function never returns null, neither should the get function, right?

    If that is so, then I think the @Nullable should be removed from the return type, since CF will just propagate the nullability of the V type parameter. If V is @Nullable, then the result will be too, but if you provide a mapping function that has a @NonNull return type, it's a bit strange to still have to handle the null case.

    I hope this question makes sense?

    opened by DavyLandman 41
  • Feature Request : Add a listener for whenever cache is updated by load/loadAll

    Hi,

    I have a use case where I need to maintain several caches that basically store the same value but have different keys. So I want to update both caches whenever I actually get the data for one of them, and therefore want to be notified whenever a load is performed. It's basically similar to the removalListener. Is there any particular reason why this is supported only for removal and not for updates?

    Thanks.

    opened by apragya 38
  • Cache Workers getting stuck?

    I saw this problem in version 2.3.1, via VisualVM: after a few hours, a bunch of ForkJoinWorkers would be stuck on the WriteBuffer.poll() function in the RUNNABLE state. I even hit an OOM when the CPU utilization became too high and my events started to get backed up. I recently upgraded to 2.3.3 and thought the problem was fixed, until the screenshots below.

    When I started up the application (10/19, 7:34 am) there was one FJW in WriteBuffer.poll, which is fine.

    This morning (10/20, 9:34 am) it is still in a constant RUNNABLE state. The number of workers in this stuck RUNNABLE state will also slowly grow over time; I have seen this behavior happen with 2.3.1.

    The CPU utilization and CPU load metrics in Grafana show an overall increase in work, as if the workers are busy-waiting/spin-looping or something.

    There are maybe 1k or so instances of the cache data structure in the application. This server has the most instances of the cache, and the issue appears to manifest faster than on the other servers. All of the servers have a 64 GB heap.

    I will have to switch back to Guava considering this is a production application.

    Thanks for your effort though!

    opened by ghost 37
  • Questions - Cache clean up without using a maintenance thread

    Hi @ben-manes,

    I could not find a better place to ask the following, so I am opening this issue.

    My question is about a maintenance cleanup approach that I am thinking of implementing. This approach does not make use of a maintenance thread.

    I want to make use of CacheWriter and the cache's notification mechanism, acting on RemovalCause.EXPIRED events to perform maintenance cleanup. Consider the interface shown below.

    Implementations of this interface would construct instances of Cache<String, T> by invoking build(long). At runtime, when a cache entry expires and the delete handler is invoked, a reference to the respective cache instance is obtained by invoking nativeCache(), and cleanUp() is then called on it. The idea here is to call cleanUp() after N entries have expired.

    Do you see any issues with this approach?

    public interface MyCache<T> {
        
        default Cache<String, T> build(final long expireAfterWriteSeconds) {
    
            final Cache<String, T> nativeCache = Caffeine.newBuilder()
                    .expireAfterWrite(expireAfterWriteSeconds, TimeUnit.SECONDS)
                    .writer(new CacheWriter<String, T>() {
    
                        private static final int CACHE_CLEANUP_THRESHOLD = <SOME_VALUE>; 
                        private final AtomicInteger expiredEntryCounter = new AtomicInteger(0);
    
                        @Override
                        public void write(String key, T value) {
                            //NO-OP
                        }
    
                        @Override
                        public void delete(final String key, final T value, final RemovalCause removalCause) {
                            if (removalCause == RemovalCause.EXPIRED) {
                                if (expiredEntryCounter.incrementAndGet() == CACHE_CLEANUP_THRESHOLD) {
                                    nativeCache().cleanUp();
                                    expiredEntryCounter.set(0);
                                }
                            }
                        }
                    })
                    .build();
    
            return nativeCache;
        }
    
        Cache<String, T> nativeCache();
    }
    
    opened by azagniotov 32
  • Safety of recursive Get operation

    I'm a newbie to thread safety. Let's say I have a Cache<K, List<V>> where V is a recursive key of the cache. Users can request all children of a key K. The children have their sub-children retrieved, and the sub-children have their sub-sub-children retrieved, until no more nodes can be located. This works adequately with a loading function which, upon not having a value for K, queries a database for only the immediate children and stays within the cache function's expected atomic operation. However, I want to reduce the number of queries to the database, since if a key is missing, a query potentially must be run for every single child and all of their descendants. I'd like to perform a modified get that can update multiple keys at once, but that breaks the expected contract of the get operation. Will it still function safely?
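
    For illustration, the single-level loading described above looks roughly like this (NodeId, dao, and root are hypothetical):

    LoadingCache<NodeId, List<NodeId>> children = Caffeine.newBuilder()
        .maximumSize(100_000)
        .build(parent -> dao.loadImmediateChildren(parent));  // one query per missing key

    // Walking the tree then performs one cache lookup (and possibly one query) per node
    for (NodeId child : children.get(root)) {
        // recurse into children.get(child) ...
    }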

    opened by superyuuki 31
  • Variable refresh

    Currently a fixed duration in refreshAfterWrite is supported. An evaluation per entry, similar to expireAfter(Expiry), would provide a more flexible approach. In #261 this would evaluate on create, read, and update. In #498, it was also requested to evaluate on the exceptional case, where the entry remains unchanged.

    Implementation notes

    A quirk in Guava (and replicated in Caffeine for compatibility) is that expireAfterWrite and expireAfterAccess may both be set and honored. As refreshAfterWrite piggybacks on the write timestamp, we would have to extend the codegen to 3 timestamps. That isn't desirable or useful, so instead we should restrict this specific combination and reuse the 2 timestamp variations already generated (e.g. for expireAfter and refreshAfterWrite). Setting both fixed and variable refresh should be disallowed. This approach only requires retrofitting methods onto existing generated cache entries to map to the correct timestamp field.

    This issue consolidates #261, #272, #360, and #498. @denghongcai @eugenemiretsky, @anuraaga, @musiKk, @minwoox, @ikhoon, @barkanido, @wcpan

    opened by ben-manes 15
  • Bulk refresh

    This Guava issue identified an expected optimization not being implemented. A getAll where some of the entries should be refreshed due to refreshAfterWrite schedules each key as an independent asynchronous operation. Due to the provided CacheLoader supporting bulk loads, it is reasonable to expect that the refresh is performed as a single batch operation.

    This optimization may be invasive and deals with complex interactions for both the synchronous and asynchronous cache implementations. Or, it could be as simple as using getAllPresent and enhancing it to support batch read post-processing.
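
    For reference, a bulk-capable loader overrides loadAll so that getAll can fetch missing keys in one batch (a sketch reusing the README's Graph example; createExpensiveGraphs is hypothetical, the Set-based signature assumes Caffeine 3.x per the v3.0.0 notes below, and imports are omitted):

    LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
        .maximumSize(10_000)
        .refreshAfterWrite(1, TimeUnit.MINUTES)
        .build(new CacheLoader<Key, Graph>() {
            @Override public Graph load(Key key) {
                return createExpensiveGraph(key);         // single-key load
            }
            @Override public Map<Key, Graph> loadAll(Set<? extends Key> keys) {
                return createExpensiveGraphs(keys);       // one batch query for all keys
            }
        });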

    opened by ben-manes 22
Releases(v3.1.2)
  • v3.1.2(Nov 23, 2022)

    Cache

    • Added detection for when a key's equality has changed and corrupted the underlying map (SOLR-16489)
    • Improved the frequency sketch by better utilizing the cpu cache line to reduce memory accesses
    • Fixed computeIfAbsent when replacing a collected weak/soft value and the custom expiry fails
    • Improved refresh conflict detection to avoid unnecessarily discarding after a reload
    • Improved eviction when the weight is oversized (#745)

    Guava

    • Added an adapter from Guava's CacheLoader to Caffeine's (#766)

    JCache

    • Fixed Cache.getConfiguration() to return an immutable instance
  • v3.1.1(May 26, 2022)

  • v3.1.0(Apr 28, 2022)

    • Fixed the publication of a removal notification when computing a null value on top of an expired entry
    • Fixed the publication of a removal notification for a conditional replacement on an unbounded cache
    • Fixed Map.equals when the traversal triggers an eviction and the subset of live entries matches
    • Improved refreshAfterWrite to return the new value if computed by the caller (#688, #699)
    • Added Interner for weak keyed equality caching (#344); a brief usage sketch follows
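
    A usage sketch of the new Interner (assuming the factory method is Interner.newWeakInterner(), mirroring Guava's interners; value is any String):

    // Returns a canonical instance equal to the argument; unreferenced canonical entries may be garbage collected
    Interner<String> interner = Interner.newWeakInterner();
    String canonical = interner.intern(value);
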
  • v3.0.6(Mar 19, 2022)

    • Fixed AsyncCache.getAll when storing additional mappings (#655)
    • Added the ability to specify the expiration time with the computation
    • Added a warning if writes stall due to blocked eviction (#672)
    • Added advanced query support for obtaining entry metadata
  • v3.0.5(Dec 3, 2021)

    Cache

    • Fixed reference eviction when used with a broken executor (JDK-8274349)
    • Suppressed log warnings if a future is cancelled or times out (#597)
    • Removed @Nullable from LoadingCache.get(key) (#594)
    • Fixed early expiration of in-flight async loads (#625)

    JCache

    • close() will now shut down the executor and wait for in-flight loads to finish
  • v2.9.3(Dec 3, 2021)

    Cache

    • Fixed reference eviction when used with a broken executor (JDK-8274349)
    • Reduced the entry overhead by 8 bytes when using weak or soft values
    • Suppressed log warnings if a future is cancelled or times out (#597)
    • Fixed Map.entrySet.contains(o) to use reference equality
    • Fixed early expiration of in-flight async loads (#625)

    JCache

    • close() will now shut down the executor and wait for in-flight loads to finish
  • v3.0.4(Sep 13, 2021)

    Cache

    • Fixed cases that incorrectly notified the removal listener for no-op replacements (#593)
    • Improved how refreshAfterWrite is triggered on a read to avoid hotspots
    • Added the ability to capture coldest & hottest weighted snapshots
    • Reduced the per-entry overhead when using weak/soft values
    • Fixed Map.entrySet.contains(o) to use reference equality
  • v3.0.3(Jul 2, 2021)

    Cache

    • Fixed reading an intermittent null weak/soft value during a concurrent write (#568)
    • Fixed extraneous eviction when concurrently removing a collected entry after a writer resurrects it with a new mapping (#568)
    • Fixed excessive retries of discarding an expired entry when the fixed duration period is extended, thereby resurrecting it (#568)
  • v2.9.2(Jul 2, 2021)

    Cache

    • Fixed reading an intermittent null weak/soft value during a concurrent write (#568)
    • Fixed extraneous eviction when concurrently removing a collected entry after a writer resurrects it with a new mapping (#568)
    • Fixed excessive retries of discarding an expired entry when the fixed duration period is extended, thereby resurrecting it (#568)
  • v3.0.2(May 3, 2021)

    Cache

    • Added cancellation of the next scheduled expiration cleanup when the cache becomes empty (#542)
    • Improved how variable expiration reorganizes the timer events (#541)
    • Improved usage of nullness annotations (#530)
    • Removed sun.misc.Unsafe and fallbacks
    • Added module descriptors (#535)
  • v2.9.1(May 3, 2021)

    Cache

    • Added cancellation of the next scheduled expiration cleanup when the cache becomes empty (#542)
    • Improved how variable expiration reorganizes the timer events (#541)
    • Added putIfAbsent optimistic fastpath (#506)
  • v3.0.1(Mar 18, 2021)

    • Fixed thread local fallback initialization for striped buffer (#515)
    • Improved eviction reordering for weighted caches (#513)
    • Added putIfAbsent optimistic fastpath (#506)
  • v3.0.0(Feb 21, 2021)

    This release includes API incompatible changes.

    Highlights

    • Java 11 or above is required
    • Java 8 users can continue to use version 2.x, which will be supported

    API improvements

    • Added Policy.refreshes() for a snapshot of the in-flight refresh operations
    • CacheLoader and AsyncCacheLoader offer bulk factory methods
    • AsyncCacheLoader methods may now throw checked exceptions
    • Better usage of Checker Framework nullness annotations (#337)
    • LoadingCache.refresh now returns the in-flight future (#143)
    • Various unimplemented default methods are now abstract
    • Added LoadingCache.refreshAll convenience method
    • Bulk loads now receive a Set of keys (was Iterable)
    • More flexible generic bounds and type parameters

    Implementation improvements

    • Refresh operations ignore redundant calls during an in-flight load and are linearizable (#193, #236, #282, #322, #373, #467)
    • The Java Platform Logging API is used instead of java.util.logging (#456)
    • sun.misc.Unsafe is no longer required (#273)

    Incompatible changes

    • VarExpiration time-based puts now return the old value instead of a boolean result
    • Removed jandex resource as no longer utilized by Quarkus
    • Split Policy.Expiration into fixed and refresh interfaces

    Deprecation removals

    • CacheWriter, SingleConsumerQueue, and UnsafeAccess
    • StatsCounter.recordEviction variations
    • CacheStats constructors

    Notes

    • CacheWriter usages can be replaced by Map computations and Caffeine.evictionListener
    • For best performance Unsafe may be used if available, otherwise falls back to VarHandles
    • We will continue to support and maintain version 2.x for Java 8 users
  • v2.9.0(Feb 16, 2021)

    Cache

    • Added Caffeine.evictionListener, which is notified within the atomic operation when an entry is automatically removed (a usage sketch follows this list)
    • Added triggering of cache maintenance when an iterator observes an expired entry, for more aggressive eviction (#487)
    • Improved eager eviction of an added or updated entry if it exceeds the cache's maximum weight
    • Deprecated CacheWriter. Please use asMap computations or an eviction listener instead
    • Added CacheStats.of(...) to allow for becoming a value-based class in a future release
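
    A usage sketch of the eviction listener added in this release (a minimal sketch reusing the README's Key/Graph types):

    Cache<Key, Graph> graphs = Caffeine.newBuilder()
        .maximumSize(10_000)
        .evictionListener((Key key, Graph graph, RemovalCause cause) ->
            System.out.printf("Key %s was evicted (%s)%n", key, cause))
        .build();
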
  • v2.8.8(Dec 8, 2020)

  • v2.8.7(Dec 7, 2020)

    Cache

    • Fixed asMap().keySet().toArray() to not return expired mappings (#472)
    • Added support for ISO-8601 durations to CaffeineSpec (#466)
    • Fixed put update optimization for variable expiration (#478)
  • v2.8.6(Oct 12, 2020)

    Cache

    • Changed false sharing protection to comply with JDK 15's field layout (Java Objects Inside Out)
    • Suppressed the removal listener notification when an AsyncCache future value resolves to null
    • Improved the implementations of AsyncCache.synchronous().asMap() conditional methods
    • Added Jandex index for assisting GraalVM AOT (https://github.com/quarkusio/quarkus/issues/10420)
    • Deprecated UnsafeAccess and SingleConsumerQueue

    JCache

    • Changed to an OSGi Component to avoid coupling consumers to the provider (#447)
    • Added the ability to record native statistics (#460)
  • v2.8.5(Jun 29, 2020)

  • v2.8.4(May 21, 2020)

  • v2.8.3(May 18, 2020)

  • v2.8.2(Apr 27, 2020)

    Cache

    • Added optimistic fast path for putIfAbsent to avoid locking (https://github.com/apache/openwhisk/pull/2797)
    • Fixed race causing an incorrect removal cause (#412)
    • Fixed SCM connection URLs (#394)

    JCache

    • Prefer the thread context classloader (#387)
  • v2.8.1(Jan 15, 2020)

  • v2.8.0(Aug 6, 2019)

    Cache

    • Included the license file in the jar (#325)
    • Added RemovalCause to StatsCounter (#304)
    • Added getAll support to manual caches (#310)
    • Fixed long overflow in statistics (https://github.com/google/guava/issues/3503)
    • Added Scheduler for prompt eviction of expired entries (#195)

    JCache

    • Fixed assigning ticker to cache builder (#313)
  • v2.7.0(Feb 24, 2019)

    Cache

    • Added async asMap() view (#156)
    • Introduced AsyncCache for manual async cache (#246)
    • Fixed async expiration when create races with reads (#298)
    • Improved hit rates by using an adaptive eviction policy (#106)
    • Fixed refresh to use the stats ticker for recording the load time (#240)
    • Rescheduled async maintenance immediately if pending work remains (#225)
    • Migrated from JSR-305 annotations to CheckerFramework & ErrorProne (#242)

    JCache

    • Added config file setting for the executor (#276)

    This release includes improvements to the eviction policy by using a hill climber to optimize for frequency or recency. For more details, see the HighScalability article and our paper Adaptive Software Cache Management.

  • v2.6.2(Feb 22, 2018)

    Cache

    • Changed the default initialCapacity from 0 to 16 to match ConcurrentHashMap (#218)
    • Fixed variable expiration's duration calculation overflowing due to a timestamp race (#217)
    • Avoided method handles due to a memory leak caused by JDK-8174749 (#222)
    • Promoted using java.time.Duration instead of a long, TimeUnit pair (#221)
    • Improved Guava compatibility for bulk get iteration order (#220)
  • v2.6.1(Dec 28, 2017)

    Cache

    • Fixed null value being propagated to callbacks on null result of a CompletableFuture (#206)
    • Improved emulation of synchronous computations in AsyncLoadingCache asMap() view
    • Added Automatic-Module-Name manifest entry for Java 9 modularity (#211)
    • Significantly reduced the jar size due to code generation bloat (#110)
    • Fixed futures not expiring due to stale read of the time (#212)

    JCache

    • Fixed Cache invoke() not notifying the writer when the entry was loaded and modified (#210)
    • Upgraded to specification version 1.1.0

    ACM's Transactions on Storage has published our paper on TinyLFU! To download the paper legally without the paywall, please use the authorizer link in the project's readme.

  • v2.6.0(Nov 1, 2017)

    Cache

    • Added put methods to Policy.VarExpiration that specify the entry's expiration time (#163)
    • Fixed early expiration due to long computations and a stale read of the time (#191)

    JCache

    • Fixed cache not being created from the external configuration properly (#194)
    • Passes 1.1 preview TCK except for backwards incompatible 1.0 TCK tests
  • 2.5.6(Sep 23, 2017)

  • v2.5.5(Aug 16, 2017)

  • v2.5.4(Aug 4, 2017)

Owner
Ben Manes