Replicate your key-value store across your network, with consistency, persistence and performance.

Chronicle Map

Overview

Chronicle Map is a super-fast, in-memory, non-blocking key-value store, designed for low-latency and/or multi-process applications such as trading and financial market applications. See the Features documentation for more information.

The size of a Chronicle Map is not limited by memory (RAM), but rather by the available disk capacity.

Diagram: Chronicle Map overview (docs/images/CM Overview)
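
For illustration, here is a minimal sketch of creating a persisted Chronicle Map. The file name, sizing hints and sample entries are assumptions for the example, not part of the product documentation.

    import net.openhft.chronicle.map.ChronicleMap;

    import java.io.File;
    import java.io.IOException;

    public class PersistedMapExample {
        public static void main(String[] args) throws IOException {
            // Persisting to a file means the map is bounded by disk capacity, not RAM,
            // and the same file can be shared by several JVMs on the same machine.
            File file = new File(System.getProperty("java.io.tmpdir"), "postcodes.dat");
            try (ChronicleMap<String, String> map = ChronicleMap
                    .of(String.class, String.class)
                    .name("postcodes")              // name used in logs and error messages
                    .averageKey("Amsterdam")        // sizing hint for variable-length keys
                    .averageValue("1012 AB")        // sizing hint for variable-length values
                    .entries(50_000)                // expected maximum number of entries
                    .createPersistedTo(file)) {
                map.put("Amsterdam", "1012 AB");
                System.out.println(map.get("Amsterdam"));
            }
        }
    }

An in-memory-only map can be created the same way by calling create() instead of createPersistedTo(file).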

Use cases

Chronicle Map is used in production around the world for:

  • real-time trading systems. Chronicle Map provides in-memory access speeds, and supports ultra-low garbage collection. Chronicle Map can support the most demanding of applications.

  • highly concurrent systems. Chronicle Map supports multiple readers and writers, distributed across multiple machines.

Why use Chronicle Map?

Chronicle Map is:

  • fast. Millions of operations per second, with low and stable microsecond latencies for reads and writes. Write queries scale well up to the number of hardware execution threads in the server. Read queries never block each other.

  • reliable. Chronicle Software have a “chaos monkey” test which verifies Chronicle Map multi-master replication in the face of node and network failures. The map can optionally be persisted to disk.

  • in production at banks and hedge funds, globally.

  • built using lessons learnt from real-world experience solving real-world problems.

  • open source (standard version), and in use at hundreds of sites around the world.

Our offering

Chronicle Software provides full support for Chronicle Map, consulting to help you make the best use of the product, and delivery of projects using a mix of our resources and yours.

Replication Environment Example

The following diagram shows an example of Chronicle Map replication over three servers (or sites). Chronicle Map Replication is part of Chronicle Map (Enterprise Edition), a commercially supported version of our successful open source Chronicle Map.

Diagram: Three-way replication configuration (docs/images/Configure Three Way Replication)

Replication is multi-master, lock-free, redundant, deterministic, and eventually consistent.

The writer can optionally wait for replication to occur across nodes or regions.

Note
See Chronicle Map Replication for more information.

Documentation

The Chronicle Map documentation comprises:

Table 1. Documentation

Document                            Purpose
CM_Features                         Features description.
CM_Replication                      Replication explanation.
CM_Tutorial                         Tutorial.
CM_FAQs                             Frequently asked questions.
CM_Download                         Downloading the software.
CM_Updates                          Updates from Chronicle Map version 2.
CM_Compatibility_and_Versioning     Compatibility and versioning description.

Linked documentation is contained in the docs folder.

Comments
  • Stateless client - batch and/or async

    I am investigating Chronicle Map as a potential replacement for Redis, with concurrency in mind. In our architecture we would be looking to replace a "large" Redis instance that currently has multiple clients connecting to it, causing latency pile-ups due to Redis' blocking nature.

    The issue is that we need to make requests in random batches of ~1000. With Redis we are able to make a single request via a Lua script (or multi-get/multi-set commands) and receive a single response. In the documentation on Chronicle Map's stateless client I see that the remote calls are blocking and can be made for only one key at a time, so for us the solution is not obvious.

    While I am considering passing off each individual key task to a thread pool running X blocking threads at a time, I wonder if there might be a better solution that could take advantage of doing RPC in batches and perhaps work asynchronously. As I do not see this available currently, my questions are whether this is an enhancement you might consider, or whether you could point me to how we could write our own solution for doing this, which we'd be open to contributing back (a rough sketch of the thread-pool approach follows below).

    enhancement 2.x 
    opened by dmk23 32
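
    A minimal, illustrative sketch of the thread-pool workaround mentioned above, written against a local ChronicleMap rather than the remote stateless client; batchGet and the executor are assumptions for the example, not an existing Chronicle API.

    import net.openhft.chronicle.map.ChronicleMap;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Future;

    public class BatchGetSketch {
        // Fan a batch of ~1000 keys out over a fixed pool of worker threads and
        // collect the results into a single map, as a stand-in for a batched RPC.
        static Map<String, String> batchGet(ChronicleMap<String, String> map,
                                            List<String> keys,
                                            ExecutorService pool) throws Exception {
            List<Future<String>> futures = new ArrayList<>(keys.size());
            for (String key : keys) {
                futures.add(pool.submit(() -> map.get(key)));   // one lookup per task
            }
            Map<String, String> result = new HashMap<>(keys.size());
            for (int i = 0; i < keys.size(); i++) {
                result.put(keys.get(i), futures.get(i).get());  // gather the batch
            }
            return result;
        }
    }

    Whether this helps depends on how much of the cost is per-call overhead versus contention; it is only a sketch of the workaround the reporter describes, not a Chronicle-provided batching API.
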
  • ChronicleSet is not thread safe

    Hello,

    I am running some tests for a ChronicleSet of strings that is supposed to be used concurrently by 5 threads, and with the following code I am getting "net.openhft.chronicle.hash.locks.IllegalInterProcessLockStateException: Must not acquire update lock, while read lock is already held by this thread".

    Here is the code snippet:

    public class TestChronicleSet {

    public static <H extends ChronicleHash<K,?>,  B extends ChronicleHashBuilder<K, H, B>, K> H init(B builder, int entrySize, int averageKeySize, String fileName) {
    
        String tmp = System.getProperty("java.io.tmpdir");
        String pathname = tmp + "/" + fileName;
        File file = new File(pathname);
        try {
            H result = builder.entries(entrySize).averageKeySize(averageKeySize).createPersistedTo(file);
            return result;
        } catch (IOException ioe) {
            throw new RuntimeException(ioe);
        }
    }
    
    public synchronized static <A> ChronicleSet<A> initSet(Class<A> entryClass, int entrySize, int averageKeySize, String fileName) {
        return init(ChronicleSetBuilder.of(entryClass), entrySize, averageKeySize, fileName);
    }
    
    public static void main(String[] args) {
        ChronicleSet<String> set = initSet(String.class, 1_000_000, 30, "stringSet.dat");
        ExecutorService executor = Executors.newFixedThreadPool(5);
        for (int i = 0; i < 10; i++) {
            Runnable worker = new WorkerThread(set);
            executor.execute(worker);
        }
        executor.shutdown();
        while (!executor.isTerminated()) {
    
        }
        System.out.println("Finished all threads");
    
    
    }
    
    public static class WorkerThread implements Runnable {
        private final ChronicleSet<String> set;
        public WorkerThread(ChronicleSet<String> set){
            this.set = set;
        }
    
        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName()+" Start. Command = " + set.size());
            processCommand();
            System.out.println(Thread.currentThread().getName()+" End.");
        }
    
        private void processCommand() {
            Set<String> nomenclatures = new HashSet<>();
            for(int i = 0; i < 10; i++) {
                String nomenclature = "#############################" + i;
                nomenclatures.add(nomenclature);
    
            }
    
            set.addAll(nomenclatures);
    
    
            Set<String> strings = CollectionUtil.randomSubset(nomenclatures, new Random());
    
            set.addAll(strings);
    
    
            Set<String> toRemove = new HashSet<>();
            Random generator = new Random();
            for (int j = 0; j < 3; j++) {
                int i = generator.nextInt(10);
                String nomenclature = "#############################" + i;
                toRemove.add(nomenclature);
            }
            set.removeAll(toRemove);
    
            for (String s : set) {
                System.out.println(s);
            }
    
            strings = CollectionUtil.randomSubset(nomenclatures, new Random());
    
            set.addAll(strings);
    
            for (String s : set) {
                System.out.println(s);
            }
        }
    
    }
    

    }

    Since ChronicleSet is almost the same as ChronicleMap, I expect it to be thread safe. Can you please have a look at the example code and let me know whether this is a bug in the ChronicleSet implementation, or whether I am supposed to synchronize the addAll/removeAll operations on the application side?

    Thanks, Radoslav

    opened by rsmilyanov 23
  • MapEntryStages.innerDefaultReplaceValue()

    Regarding this commit: https://github.com/OpenHFT/Chronicle-Map/commit/79c7701a52058361150f7d2e7139cbd0d4211a9f, @RobAustin there was a reason why the code was written the way it was - it's that during relocation a different value alignment might be required, therefore a different number of chunks might be needed.

    opened by leventov 20
  • Documentation refers to ChronicleMapBuilder#entrySize,keySize but missing on Object

    I'm using 2.1.10 and trying to change the entrySize appropriately, but apparently this was removed at some point. The documentation still refers to this method (https://github.com/OpenHFT/Chronicle-Map#dont-forget-to-set-the-entrysize) and the exception that I receive also states this as a solution.

    java.lang.IllegalStateException: We try to figure out size of objects in serialized form, but it exceeds 16777216 bytes. We assume this is a error and throw exception at this point. If you really want larger keys/values, use ChronicleMapBuilder.keySize(int)/valueSize(int)/entrySize(int) configurations

    What is the recommended way to set the entry size on current releases? (A rough sketch using the Chronicle Map 3 sizing methods follows below.)

    opened by psg9999 20
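
    As a hedged illustration of the Chronicle Map 3 replacements for entrySize; the key/value sizes and entry count below are made-up placeholders, not recommendations.

    import net.openhft.chronicle.map.ChronicleMap;

    public class SizingExample {
        public static void main(String[] args) {
            // In Chronicle Map 3 the per-entry size is derived from sizing hints rather
            // than a single entrySize() setting: give an average (or constant) key and
            // value size plus the expected number of entries.
            ChronicleMap<String, byte[]> map = ChronicleMap
                    .of(String.class, byte[].class)
                    .averageKeySize(32)           // average serialized key size, in bytes
                    .averageValueSize(4096)       // average serialized value size, in bytes
                    .entries(100_000)             // expected maximum number of entries
                    .create();
            map.put("example-key", new byte[2048]);
            map.close();
        }
    }

    The averageKeySize/averageValueSize/entries values above should be replaced with figures that reflect the application's actual data.
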
  • Runtime compatibility with Java 9

    Chronicle-Map is currently not working with the upcoming JRE 9. So far, I've identified the following problems (tested with 9+179):

    • ByteBufferDataAccess is using sun.nio.ch.DirectBuffer which won't be accessible anymore. As MappedByteBuffer is the only ByteBuffer which is a DirectBuffer, this should be easy to resolve by changing getData to:
    public Data<ByteBuffer> getData(@NotNull ByteBuffer instance) {
        bb = instance;
        if (instance instanceof MappedByteBuffer) {
            nativeBytesStore.init(instance, false);
            bytesStore = nativeBytesStore;
        } else {
            heapBytesStore.init(instance);
            bytesStore = heapBytesStore;
        }
        return this;
    }
    
    • VanillaChronicleHash is using sun.misc.Cleaner which won't be available anymore.
    enhancement 
    opened by mstrap 17
  • Try to include in maven, Missing artifact com.sun.java:tools:jar:1.8.0_20

    <dependency>
        <groupId>net.openhft</groupId>
        <artifactId>chronicle-map</artifactId>
        <version>1.0.2</version>
    </dependency>
    

    Using JDK 1.7.

    From the 1.8 in the tools artifact version, can I assume this requires JDK 1.8?

    Thanks!

    opened by fancellu 17
  • OpenJDK 10 - compatibility issue

    Hi, taking simple working code with Sun JDK 1.8 and compiling/running it with OpenJDK 10, errors are thrown. Here is the stack trace:

    net.openhft.chronicle.core.Jvm: Unable to determine max direct memory
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by net.openhft.chronicle.core.Jvm (file:/xxxxxxxx/net/openhft/chronicle-core/1.15.1/chronicle-core-1.15.1.jar) to field java.nio.Bits.reservedMemory
    WARNING: Please consider reporting this to the maintainers of net.openhft.chronicle.core.Jvm
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    Exception in thread "main" java.lang.IllegalStateException: Key size in serialized form must be configured in ChronicleMap, at least approximately. Use builder.averageKey()/.constantKeySizeBySample()/.averageKeySize() methods to configure the size
        at net.openhft.chronicle.map.ChronicleMapBuilder.preMapConstruction(ChronicleMapBuilder.java:1865)
        at net.openhft.chronicle.map.ChronicleMapBuilder.preMapConstruction(ChronicleMapBuilder.java:1848)
        at net.openhft.chronicle.map.ChronicleMapBuilder.newMap(ChronicleMapBuilder.java:1832)
        at net.openhft.chronicle.map.ChronicleMapBuilder.lambda$createWithFile$3(ChronicleMapBuilder.java:1633)
        at net.openhft.chronicle.map.ChronicleMapBuilder.lambda$fileLockedIO$1(ChronicleMapBuilder.java:257)
        at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1922)
        at net.openhft.chronicle.map.ChronicleMapBuilder.fileLockedIO(ChronicleMapBuilder.java:254)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1631)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createPersistedTo(ChronicleMapBuilder.java:1549)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createOrRecoverPersistedTo(ChronicleMapBuilder.java:1571)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createOrRecoverPersistedTo(ChronicleMapBuilder.java:1560)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createOrRecoverPersistedTo(ChronicleMapBuilder.java:1554)

    Code is

    ChronicleMap<LongValue, int[]> map = null;
    try {
        map = ChronicleMapBuilder
                .of(LongValue.class, int[].class)
                .name("map")
                .entries(HOW_MANY)
                .averageValue(new int[VALUE_LENGTH])
                .createOrRecoverPersistedTo(path);

    opened by rjtokenring 16
  • "net.openhft.chronicle.core.io.IORuntimeException: java.lang.ClassNotFoundException: [Ljava/lang/Long" when using Long[] as value class

    I am seeing the following error:

    net.openhft.chronicle.core.io.IORuntimeException: java.lang.ClassNotFoundException: [Ljava/lang/Long
        at net.openhft.chronicle.wire.TextWire$TextValueIn.typeLiteral(TextWire.java:2932)
        at net.openhft.chronicle.map.VanillaChronicleMap.readMarshallableFields(VanillaChronicleMap.java:123)
        at net.openhft.chronicle.hash.impl.VanillaChronicleHash.readMarshallable(VanillaChronicleHash.java:243)
        at net.openhft.chronicle.wire.SerializationStrategies$1.readUsing(SerializationStrategies.java:43)
        at net.openhft.chronicle.wire.TextWire$TextValueIn.marshallable(TextWire.java:2975)
        at net.openhft.chronicle.wire.Wires.objectMap(Wires.java:442)
        at net.openhft.chronicle.wire.Wires.object0(Wires.java:495)
        at net.openhft.chronicle.wire.ValueIn.object(ValueIn.java:587)
        at net.openhft.chronicle.wire.TextWire$TextValueIn.objectWithInferredType0(TextWire.java:3269)
        at net.openhft.chronicle.wire.TextWire$TextValueIn.objectWithInferredType(TextWire.java:3242)
        at net.openhft.chronicle.wire.TextWire$TextValueIn.typedMarshallable(TextWire.java:3037)
        at net.openhft.chronicle.map.ChronicleMapBuilder.openWithExistingFile(ChronicleMapBuilder.java:1767)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1566)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createPersistedTo(ChronicleMapBuilder.java:1495)

    when I run the code:

    File file = new File("d:/temp/_" + System.nanoTime());
    // write
    ChronicleMap<Long, Long[]> writeMap = ChronicleMap
            .of(Long.class, Long[].class)
            .entries(1_000)
            .averageValue(new Long[150])
            .createPersistedTo(file);
    Long a[] = {2L};
    writeMap.put(1L, a);

    // read
    ChronicleMap<Long, Long[]> readMap = ChronicleMapBuilder
            .of(Long.class, Long[].class)
            .averageValue(new Long[150])
            .createPersistedTo(file);
    Long b[] = readMap.get(1L);

    I ran System.out.println(Long[].class.getName()); and got "[Ljava.lang.Long;". But the error log says "net.openhft.chronicle.core.io.IORuntimeException: java.lang.ClassNotFoundException: [Ljava/lang/Long", which is missing the ";" at the end of the class name. I tried to step through and found that in BytesInternal.java, line 1769, tester.isStopChar(c) returns true when c == 59 (';'), which means it can't return a class name ending with ";".

    opened by Hanalababy 15
  • Enable off heap access for arrays through proxies

    First of all, I really like OpenHFT, thanks for sharing!

    I tried to follow the example using off-heap direct access to primitives (https://github.com/OpenHFT/Chronicle-Map#off-heap-storage-and-how-using-a-proxy-object-can-improve-performance). Sadly this does not seem to work with arrays, but it would be a very cool thing. Even better would be if we could act on a specific index of an array directly off heap!

    See this quick and dirty example:

    public class HftTest {
        public static void main(String[] args) {
            ChronicleMap proxyMap = ChronicleMapBuilder.of(Integer.class,
                    IArray.class).create();
    
    
            TestArray ta1 = new TestArray();
            ta1.setdata(new double[]{1, 2, 3, 4, 5});
    
            TestArray ta2 = new TestArray();
            ta1.setdata(new double[]{5, 6, 7, 8, 9});
    
            proxyMap.put(1, ta1);
            proxyMap.put(2, ta2);
    
            IArray using = proxyMap.newValueInstance();
    
            System.out.println(Arrays.toString(proxyMap.getUsing(1, using).getdata()));
            System.out.println(Arrays.toString(proxyMap.getUsing(2, using).getdata()));
        }
    }
    

    Interface

    public interface IArray extends Serializable {
        double[] getdata();
        void setdata(double[] data);
    }
    

    Implementation

    public class TestArray implements IArray {
        private double[] data;
    
        @Override
        public double[] getdata() {
            return data;
        }
    
        @Override
        public void setdata(double[] data) {
            this.data = data;
        }
    }
    
    opened by KIC 15
  • Illegal reflective access operation

    With all versions later than 3.17.8 on Java 11, this message occurs:

    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access using Lookup on net.openhft.chronicle.core.Jvm (file:/Users/TORS/.m2/repository/net/openhft/chronicle-core/2.19.32/chronicle-core-2.19.32.jar) to class java.lang.reflect.AccessibleObject
    WARNING: Please consider reporting this to the maintainers of net.openhft.chronicle.core.Jvm
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release

    bug 
    opened by wapgui 13
  • InterProcessDeadLockException

    I am seeing the following exception from time to time. I am not sure what causes this. It seems the .dat file is corrupted, as the problem could be fixed after I regenerated the file.

    Caused by: net.openhft.chronicle.hash.locks.InterProcessDeadLockException: ChronicleMap{name=null, file=E:\pva_binary_data_TODAY\secIdSymbol.dat, identityHashCode=1995022532}:
    Contexts locked on this segment:
    net.openhft.chronicle.map.impl.CompiledMapIterationContext@38391dde: used, segment 27, local state: UNLOCKED, read lock count: 0, update lock count: 0, write lock count: 0
    Current thread contexts:
    net.openhft.chronicle.map.impl.CompiledMapQueryContext@3924d577: unused
    net.openhft.chronicle.map.impl.CompiledMapIterationContext@38391dde: used, segment 27, local state: UNLOCKED, read lock count: 0, update lock count: 0, write lock count: 0

    at net.openhft.chronicle.map.impl.CompiledMapIterationContext.debugContextsAndLocks(CompiledMapIterationContext.java:1798)
    at net.openhft.chronicle.map.impl.CompiledMapIterationContext.debugContextsAndLocksGuarded(CompiledMapIterationContext.java:116)
    at net.openhft.chronicle.map.impl.CompiledMapIterationContext$UpdateLock.lock(CompiledMapIterationContext.java:809)
    at net.openhft.chronicle.map.impl.CompiledMapIterationContext.forEachSegmentEntryWhile(CompiledMapIterationContext.java:3942)
    at net.openhft.chronicle.map.impl.CompiledMapIterationContext.forEachSegmentEntry(CompiledMapIterationContext.java:3948)
    at net.openhft.chronicle.map.ChronicleMapIterator.fillEntryBuffer(ChronicleMapIterator.java:61)
    at net.openhft.chronicle.map.ChronicleMapIterator.hasNext(ChronicleMapIterator.java:77)
    at java.util.Iterator.forEachRemaining(Iterator.java:115)
    at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
    at com.pva.common.util.UUIDUtil.reverseGenericeMap(UUIDUtil.java:92)
    at com.pva.common.util.UUIDUtil.reverseMap(UUIDUtil.java:100)
    at com.pva.algotrading.analysis.service.api.impl.SymbolLookupImpl.loadRevMap(SymbolLookupImpl.java:57)
    ... 29 more
    

    Caused by: net.openhft.chronicle.hash.locks.InterProcessDeadLockException: Failed to acquire the lock in 60 seconds. Possible reasons:

    • The lock was not released by the previous holder. If you use contexts API, for example map.queryContext(key), in a try-with-resources block.

    • This Chronicle Map (or Set) instance is persisted to disk, and the previous process (or one of parallel accessing processes) has crashed while holding this lock. In this case you should use ChronicleMapBuilder.recoverPersistedTo() procedure to access the Chronicle Map instance.

    • A concurrent thread or process, currently holding this lock, spends unexpectedly long time (more than 60 seconds) in the context (try-with-resource block) or one of overridden interceptor methods (or MapMethods, or MapEntryOperations, or MapRemoteOperations) while performing an ordinary Map operation or replication. You should either redesign your logic to spend less time in critical sections (recommended) or acquire this lock with tryLock(time, timeUnit) method call, with sufficient time specified.

    • Segment(s) in your Chronicle Map are very large, and iteration over them takes more than 60 seconds. In this case you should acquire this lock with tryLock(time, timeUnit) method call, with longer timeout specified.

    • This is a dead lock. If you perform multi-key queries, ensure you acquire segment locks in the order (ascending by segmentIndex()), you can find an example here: https://github.com/OpenHFT/Chronicle-Map#multi-key-queries

        at net.openhft.chronicle.hash.impl.BigSegmentHeader.deadLock(BigSegmentHeader.java:71)
        at net.openhft.chronicle.hash.impl.BigSegmentHeader.updateLock(BigSegmentHeader.java:442)
        at net.openhft.chronicle.map.impl.CompiledMapIterationContext$UpdateLock.lock(CompiledMapIterationContext.java:807)
        ... 43 more

    opened by Hanalababy 13
  • java.io.IOException: posix_fallocate() returned 22 on RaspberryPi (Raspbian)

    I am running a java application on a RaspberryPi (OS: NAME="Raspbian GNU/Linux" VERSION_ID="11" VERSION="11 (bullseye)")

    Java Version: openjdk version "11.0.16" 2022-07-19 OpenJDK Runtime Environment (build 11.0.16+8-post-Raspbian-1deb11u1) OpenJDK Server VM (build 11.0.16+8-post-Raspbian-1deb11u1, mixed mode)

    Using chronicle-map version 3.22.9

    I got the following exception when creating a ChronicleMap (configuration: averageKeySize: 8, averageValueSize: 4194304, maxBloatFactor: 5.0, entries: 10):

    java.io.IOException: posix_fallocate() returned 22
        at net.openhft.chronicle.hash.impl.util.jna.PosixFallocate.fallocate(PosixFallocate.java:25)
        at net.openhft.chronicle.hash.impl.VanillaChronicleHash.fallocate(VanillaChronicleHash.java:1128)
        at net.openhft.chronicle.hash.impl.VanillaChronicleHash.map(VanillaChronicleHash.java:1113)
        at net.openhft.chronicle.hash.impl.VanillaChronicleHash.createMappedStoreAndSegments(VanillaChronicleHash.java:515)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createWithNewFile(ChronicleMapBuilder.java:1843)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1741)
        at net.openhft.chronicle.map.ChronicleMapBuilder.recoverPersistedTo(ChronicleMapBuilder.java:1622)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createOrRecoverPersistedTo(ChronicleMapBuilder.java:1605)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createOrRecoverPersistedTo(ChronicleMapBuilder.java:1597)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createOrRecoverPersistedTo(ChronicleMapBuilder.java:1591)

    There is enough space on the volume (51G free), type ext4: /dev/root ext4 59G 5,0G 51G 9% /

    The application breaks with this exception, but the file is correctly generated! When I restart the application, everything works fine.

    Do you have any idea why posix_fallocate() returns error code 22, even though the file is generated and valid?

    I also added a ChronicleHashCorruption.Listener to .createOrRecoverPersistedTo.

    This listener writes the following to the log:

    Message: file=/home/opcua-thinedge/opcua/data/deviceTypeMapping.dat: size-prefixed blob readiness bit is set to NOT_COMPLETE
    Message: segment headers offset of map at /home/opcua-thinedge/opcua/data/deviceTypeMapping.dat corrupted. stored: 0, should be: 4096
    Message: data store size of map at /home/opcua-thinedge/opcua/data/deviceTypeMapping.dat corrupted. stored: 0, should be: 43389504

    But this is not true; after the application is stopped, the file is created with exactly the expected size!

    opened by PestusAtSAG 0
  • Bump versions-maven-plugin from 2.13.0 to 2.14.2

    Bumps versions-maven-plugin from 2.13.0 to 2.14.2.

    dependencies 
    opened by dependabot[bot] 0
  • Bump xstream from 1.4.19 to 1.4.20

    Bumps xstream from 1.4.19 to 1.4.20.

    dependencies 
    opened by dependabot[bot] 0
  • size of ObjectOutputStream is not equal when two Maps are equal

    In my use case my Map will contain a Map for both keys and values.

    I have run into a weird case where I found two maps that are equal, but the size of their ObjectOutputStream output is not (ObjectOutputStream is what ChronicleMap uses under the covers).

    When using a java.util.HashMap my use case works as expected. When I substitute in ChronicleMap, my use case no longer works.

    Here is a link to the code that shows the issue: https://github.com/mores/maven-examples/blob/ac6012ca1236ad460f3b3767037531fdb2e3dffd/offHeapMap/src/test/java/org/test/MapTest.java#L76

    I am able to reproduce this bug on Ubuntu using both Java 1.8.0_161 and 11.0.17.

    opened by mores 2
  • Bump third-party-bom from 3.23.0 to 3.23.1

    Bumps third-party-bom from 3.23.0 to 3.23.1.

    dependencies 
    opened by dependabot[bot] 0
Releases: chronicle-map-3.23.5