Replicate your Key Value Store across your network, with consistency, persistence and performance.

Chronicle Map

Overview

Chronicle Map is a super-fast, in-memory, non-blocking key-value store, designed for low-latency and/or multi-process applications such as trading and financial market applications. See the Features documentation for more information.

The size of a Chronicle Map is not limited by memory (RAM), but rather by the available disk capacity.

Figure: Chronicle Map overview (docs\images\CM Overview)
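
A minimal sketch of creating a Chronicle Map persisted to a file, assuming String keys and Long values; the file name, map name, sizing hint and entry count below are illustrative values only:

    import net.openhft.chronicle.map.ChronicleMap;

    import java.io.File;
    import java.io.IOException;

    public class PersistedMapExample {
        public static void main(String[] args) throws IOException {
            File file = new File("city-populations.dat");       // illustrative file name
            try (ChronicleMap<String, Long> map = ChronicleMap
                    .of(String.class, Long.class)
                    .name("city-populations")                    // illustrative map name
                    .averageKeySize(16)                          // approximate key size in bytes
                    .entries(1_000_000)                          // expected maximum number of entries
                    .createPersistedTo(file)) {                  // backed by the file, not limited by RAM
                map.put("London", 8_900_000L);
                System.out.println(map.get("London"));
            }
        }
    }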

Use cases

Chronicle Map is used in production around the world for:

  • real-time trading systems. Chronicle Map provides in-memory access speeds with ultra-low garbage collection, and can support the most demanding of applications.

  • highly concurrent systems. Chronicle Map supports multiple readers and writers, distributed across multiple machines.

Why use Chronicle Map?

Chronicle Map is:

  • fast. Millions of operations per second, with low and stable microsecond latencies for reads and writes. Write queries scale well up to the number of hardware execution threads in the server. Read queries never block each other.

  • reliable. Chronicle Software have a “chaos monkey” test which verifies Chronicle Map multi-master replication in the face of node and network failures. The map can optionally be persisted to disk.

  • in production at banks and hedge funds, globally.

  • built using lessons learnt from real-world experience solving real-world problems.

  • open source (standard version), and in use at hundreds of sites around the world.

Our offering

Chronicle Software provides full support for Chronicle Map and consulting to help you make the best use of the product, and can also deliver projects using a mix of our resources and your own.

Replication Environment Example

The following diagram shows an example of Chronicle Map replication over three servers (or sites). Chronicle Map Replication is part of Chronicle Map (Enterprise Edition), a commercially supported version of our successful open-source Chronicle Map.

Figure: Three-way replication configuration (docs\images\Configure Three Way Replication)

Replication is multi-master, lock-free, redundant, deterministic, and eventually consistent.

The writer can optionally wait for replication to occur across nodes or regions.

Note
See Chronicle Map Replication for more information.

Documentation

The Chronicle Map documentation comprises:

Table 1. Documentation

  • CM_Features: Features description.
  • CM_Replication: Replication explanation.
  • CM_Tutorial: Tutorial.
  • CM_FAQs: Frequently asked questions.
  • CM_Download: Downloading the software.
  • CM_Updates: Updates from Chronicle Map version 2.
  • CM_Compatibility_and_Versioning: Compatibility and versioning description.

Linked documentation is contained in the docs folder.

Comments
  • Stateless client - batch and/or async

    I am investigating Chronicle Map as a potential replacement for Redis, with concurrency in mind. In our architecture we would be looking to replace a "large" Redis instance that currently has multiple clients connecting to it, causing latency pile-ups due to Redis' blocking nature.

    The issue is that we need to make requests in random batches of ~1000. With Redis we are able to make a single request via a Lua script (or multi-get / multi-set commands) and receive a single response. In the documentation on Chronicle Map's stateless client I see that the remote calls are blocking and can only be made one key at a time, so for us the solution is not obvious.

    While I am considering passing off each individual key task to a threadpool running X blocking threads at a time, I wonder if there might be a better solution that could take advantage of doing RPC in batches and perhaps work asynchronously. As I do not see this available currently, my questions are whether this is an enhancement you might consider or if you could perhaps point me to if/how we could write our own solution for doing this - which we'd be open to contributing back...

    enhancement 2.x 
    opened by dmk23 32
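
    A minimal sketch of the thread-pool idea described in the issue above, assuming a local ChronicleMap<String, String> and a hypothetical getAll() helper; it fans individual blocking get() calls out to a fixed pool and collects the results, rather than being a true batched or asynchronous RPC:

        import net.openhft.chronicle.map.ChronicleMap;

        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.*;
        import java.util.stream.Collectors;

        public class BatchGetSketch {

            // Fetch a batch of keys by submitting one blocking get() per key to a
            // fixed-size pool; the pool size is an illustrative value.
            static Map<String, String> getAll(ChronicleMap<String, String> map, List<String> keys)
                    throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(8);
                Map<String, String> result = new ConcurrentHashMap<>();
                try {
                    List<Callable<Void>> tasks = keys.stream()
                            .<Callable<Void>>map(k -> () -> {
                                String v = map.get(k);        // one blocking read per key
                                if (v != null)
                                    result.put(k, v);
                                return null;
                            })
                            .collect(Collectors.toList());
                    pool.invokeAll(tasks);                    // waits until the whole batch completes
                    return result;
                } finally {
                    pool.shutdown();
                }
            }
        }
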
  • ChronicleSet is not thread safe

    Hello,

    I am running some tests for a ChronicleSet of strings that is supposed to be used concurrently by 5 threads, and with the following code I am getting "net.openhft.chronicle.hash.locks.IllegalInterProcessLockStateException: Must not acquire update lock, while read lock is already held by this thread".

    Here is the code snippet:

    public class TestChronicleSet {

    public static <H extends ChronicleHash<K,?>,  B extends ChronicleHashBuilder<K, H, B>, K> H init(B builder, int entrySize, int averageKeySize, String fileName) {
    
        String tmp = System.getProperty("java.io.tmpdir");
        String pathname = tmp + "/" + fileName;
        File file = new File(pathname);
        try {
            H result = builder.entries(entrySize).averageKeySize(averageKeySize).createPersistedTo(file);
            return result;
        } catch (IOException ioe) {
            throw new RuntimeException(ioe);
        }
    }
    
    public synchronized static <A> ChronicleSet<A> initSet(Class<A> entryClass, int entrySize, int averageKeySize, String fileName) {
        return init(ChronicleSetBuilder.of(entryClass), entrySize, averageKeySize, fileName);
    }
    
    public static void main(String[] args) {
        ChronicleSet<String> set = initSet(String.class, 1_000_000, 30, "stringSet.dat");
        ExecutorService executor = Executors.newFixedThreadPool(5);
        for (int i = 0; i < 10; i++) {
            Runnable worker = new WorkerThread(set);
            executor.execute(worker);
        }
        executor.shutdown();
        while (!executor.isTerminated()) {
    
        }
        System.out.println("Finished all threads");
    
    
    }
    
    public static class WorkerThread implements Runnable {
        private final ChronicleSet<String> set;
        public WorkerThread(ChronicleSet<String> set){
            this.set = set;
        }
    
        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName()+" Start. Command = " + set.size());
            processCommand();
            System.out.println(Thread.currentThread().getName()+" End.");
        }
    
        private void processCommand() {
            Set<String> nomenclatures = new HashSet<>();
            for(int i = 0; i < 10; i++) {
                String nomenclature = "#############################" + i;
                nomenclatures.add(nomenclature);
    
            }
    
            set.addAll(nomenclatures);
    
    
            Set<String> strings = CollectionUtil.randomSubset(nomenclatures, new Random());
    
            set.addAll(strings);
    
    
            Set<String> toRemove = new HashSet<>();
            Random generator = new Random();
            for (int j = 0; j < 3; j++) {
                int i = generator.nextInt(10);
                String nomenclature = "#############################" + i;
                toRemove.add(nomenclature);
            }
            set.removeAll(toRemove);
    
            for (String s : set) {
                System.out.println(s);
            }
    
            strings = CollectionUtil.randomSubset(nomenclatures, new Random());
    
            set.addAll(strings);
    
            for (String s : set) {
                System.out.println(s);
            }
        }
    
    }
    

    }

    Since ChronicleSet is almost the same as ChronicleMap, I expect it to be thread safe. Can you please have a look at the example code and let me know whether this is a bug in the ChronicleSet implementation, or whether I am supposed to synchronize the addAll/removeAll operations on the application side?

    Thanks, Radoslav

    opened by rsmilyanov 23
  • MapEntryStages.innerDefaultReplaceValue()

    Regarding this commit: https://github.com/OpenHFT/Chronicle-Map/commit/79c7701a52058361150f7d2e7139cbd0d4211a9f, @RobAustin, there was a reason why the code was written the way it was: during relocation a different value alignment might be required, and therefore a different number of chunks might be needed.

    opened by leventov 20
  • Documentation refers to ChronicleMapBuilder#entrySize,keySize but missing on Object

    I'm using 2.1.10 and trying to change the entrySize appropriately, but apparently this was removed at some point. The documentation still refers to this function (https://github.com/OpenHFT/Chronicle-Map#dont-forget-to-set-the-entrysize) and the exception that I receive also states this as a solution.

    java.lang.IllegalStateException: We try to figure out size of objects in serialized form, but it exceeds 16777216 bytes. We assume this is a error and throw exception at this point. If you really want larger keys/values, use ChronicleMapBuilder.keySize(int)/valueSize(int)/entrySize(int) configurations

    What is the recommended way to set the entry size on current releases?

    opened by psg9999 20
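
    In Chronicle Map 3, sizing is configured through average key and value size hints on the builder rather than entrySize(); a minimal sketch, with illustrative map name, types and sizes:

        import net.openhft.chronicle.map.ChronicleMap;

        public class SizingExample {
            public static void main(String[] args) {
                // Chronicle Map 3 sizes entries from average key/value hints rather than entrySize().
                try (ChronicleMap<String, byte[]> map = ChronicleMap
                        .of(String.class, byte[].class)
                        .name("sizing-example")              // illustrative name
                        .entries(500_000)                    // expected maximum number of entries
                        .averageKeySize(32)                  // approximate serialized key size, in bytes
                        .averageValueSize(4_096)             // approximate serialized value size, in bytes
                        .create()) {
                    map.put("key", new byte[100]);
                }
            }
        }
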
  • Runtime compatibility with Java 9

    Chronicle-Map currently does not work with the upcoming JRE 9. So far, I've identified the following problems (tested with 9+179):

    • ByteBufferDataAccess is using sun.nio.ch.DirectBuffer which won't be accessible anymore. As MappedByteBuffer is the only ByteBuffer which is a DirectBuffer, this should be easy to resolve by changing getData to:
    public Data<ByteBuffer> getData(@NotNull ByteBuffer instance) {
        bb = instance;
        if (instance instanceof MappedByteBuffer) {
            nativeBytesStore.init(instance, false);
            bytesStore = nativeBytesStore;
        } else {
            heapBytesStore.init(instance);
            bytesStore = heapBytesStore;
        }
        return this;
    }
    
    • VanillaChronicleHash is using sun.misc.Cleaner which won't be available anymore.
    enhancement 
    opened by mstrap 17
  • Try to include in maven, Missing artifact com.sun.java:tools:jar:1.8.0_20

     <dependency>
         <groupId>net.openhft</groupId>
         <artifactId>chronicle-map</artifactId>
         <version>1.0.2</version>
     </dependency>
    

    Using JDK 1.7

    From the 1.8 in the tools artifact, can I assume this requires JDK 1.8?

    Thanks!

    opened by fancellu 17
  • OpenJDK 10 - compatibility issue

    Hi, taking simple working code built with Sun JDK 1.8 and compiling/running it with OpenJDK 10, errors are thrown. Here is the stack trace:

    net.openhft.chronicle.core.Jvm: Unable to determine max direct memory
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by net.openhft.chronicle.core.Jvm (file:/xxxxxxxx/net/openhft/chronicle-core/1.15.1/chronicle-core-1.15.1.jar) to field java.nio.Bits.reservedMemory
    WARNING: Please consider reporting this to the maintainers of net.openhft.chronicle.core.Jvm
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    Exception in thread "main" java.lang.IllegalStateException: Key size in serialized form must be configured in ChronicleMap, at least approximately. Use builder.averageKey()/.constantKeySizeBySample()/.averageKeySize() methods to configure the size
        at net.openhft.chronicle.map.ChronicleMapBuilder.preMapConstruction(ChronicleMapBuilder.java:1865)
        at net.openhft.chronicle.map.ChronicleMapBuilder.preMapConstruction(ChronicleMapBuilder.java:1848)
        at net.openhft.chronicle.map.ChronicleMapBuilder.newMap(ChronicleMapBuilder.java:1832)
        at net.openhft.chronicle.map.ChronicleMapBuilder.lambda$createWithFile$3(ChronicleMapBuilder.java:1633)
        at net.openhft.chronicle.map.ChronicleMapBuilder.lambda$fileLockedIO$1(ChronicleMapBuilder.java:257)
        at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1922)
        at net.openhft.chronicle.map.ChronicleMapBuilder.fileLockedIO(ChronicleMapBuilder.java:254)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1631)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createPersistedTo(ChronicleMapBuilder.java:1549)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createOrRecoverPersistedTo(ChronicleMapBuilder.java:1571)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createOrRecoverPersistedTo(ChronicleMapBuilder.java:1560)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createOrRecoverPersistedTo(ChronicleMapBuilder.java:1554)

    The code is:

    ChronicleMap<LongValue, int[]> map = null;
    try {
        map = ChronicleMapBuilder
                .of(LongValue.class, int[].class)
                .name("map")
                .entries(HOW_MANY)
                .averageValue(new int[VALUE_LENGTH])
                .createOrRecoverPersistedTo(path);

    opened by rjtokenring 16
  • "net.openhft.chronicle.core.io.IORuntimeException: java.lang.ClassNotFoundException: [Ljava/lang/Long" when using Long[] as value class

    I am seeing the following error:

    net.openhft.chronicle.core.io.IORuntimeException: java.lang.ClassNotFoundException: [Ljava/lang/Long
        at net.openhft.chronicle.wire.TextWire$TextValueIn.typeLiteral(TextWire.java:2932)
        at net.openhft.chronicle.map.VanillaChronicleMap.readMarshallableFields(VanillaChronicleMap.java:123)
        at net.openhft.chronicle.hash.impl.VanillaChronicleHash.readMarshallable(VanillaChronicleHash.java:243)
        at net.openhft.chronicle.wire.SerializationStrategies$1.readUsing(SerializationStrategies.java:43)
        at net.openhft.chronicle.wire.TextWire$TextValueIn.marshallable(TextWire.java:2975)
        at net.openhft.chronicle.wire.Wires.objectMap(Wires.java:442)
        at net.openhft.chronicle.wire.Wires.object0(Wires.java:495)
        at net.openhft.chronicle.wire.ValueIn.object(ValueIn.java:587)
        at net.openhft.chronicle.wire.TextWire$TextValueIn.objectWithInferredType0(TextWire.java:3269)
        at net.openhft.chronicle.wire.TextWire$TextValueIn.objectWithInferredType(TextWire.java:3242)
        at net.openhft.chronicle.wire.TextWire$TextValueIn.typedMarshallable(TextWire.java:3037)
        at net.openhft.chronicle.map.ChronicleMapBuilder.openWithExistingFile(ChronicleMapBuilder.java:1767)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1566)
        at net.openhft.chronicle.map.ChronicleMapBuilder.createPersistedTo(ChronicleMapBuilder.java:1495)

    when I run the code:

    File file = new File("d:/temp/_" + System.nanoTime());

    // write
    ChronicleMap<Long, Long[]> writeMap = ChronicleMap
            .of(Long.class, Long[].class)
            .entries(1_000)
            .averageValue(new Long[150])
            .createPersistedTo(file);
    Long a[] = {2L};
    writeMap.put(1L, a);

    // read
    ChronicleMap<Long, Long[]> readMap = ChronicleMapBuilder
            .of(Long.class, Long[].class)
            .averageValue(new Long[150])
            .createPersistedTo(file);
    Long b[] = readMap.get(1L);

    I ran "System.out.println(Long[].class.getName());" and got "[Ljava.lang.Long;". But the error log says "net.openhft.chronicle.core.io.IORuntimeException: java.lang.ClassNotFoundException: [Ljava/lang/Long", which is missing the ";" at the end of the class name. I stepped through and found that in "BytesInternal.java" line 1769, tester.isStopChar(c) returns true when c == 59 (';'), which means it cannot return a class name ending with ";".

    opened by Hanalababy 15
  • Enable off heap access for arrays through proxies

    First of all I really like openHFT, thanks for sharing!!

    I tried to follow the example using off-heap direct access to primitives (https://github.com/OpenHFT/Chronicle-Map#off-heap-storage-and-how-using-a-proxy-object-can-improve-performance). Sadly this does not seem to work with arrays, but it would be a very cool thing. Even better would be if we could act on a specific index of an array directly off heap!

    See this quick and dirty example: Hello World

    public class HftTest {
        public static void main(String[] args) {
            ChronicleMap<Integer, IArray> proxyMap = ChronicleMapBuilder
                    .of(Integer.class, IArray.class).create();

            TestArray ta1 = new TestArray();
            ta1.setdata(new double[]{1, 2, 3, 4, 5});

            TestArray ta2 = new TestArray();
            ta2.setdata(new double[]{5, 6, 7, 8, 9});

            proxyMap.put(1, ta1);
            proxyMap.put(2, ta2);

            IArray using = proxyMap.newValueInstance();

            System.out.println(Arrays.toString(proxyMap.getUsing(1, using).getdata()));
            System.out.println(Arrays.toString(proxyMap.getUsing(2, using).getdata()));
        }
    }
    

    Interface

    public interface IArray extends Serializable {
        double[] getdata();
        void setdata(double[] data);
    }
    

    Implementation

    public class TestArray implements IArray {
        private double[] data;
    
        @Override
        public double[] getdata() {
            return data;
        }
    
        @Override
        public void setdata(double[] data) {
            this.data = data;
        }
    }
    
    opened by KIC 15
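
    One way to model a fixed-length array off heap is a value interface with indexed accessors; a minimal sketch, assuming the @Array annotation and Values factory from the chronicle-values module, with illustrative names and sizes:

        import net.openhft.chronicle.map.ChronicleMap;
        import net.openhft.chronicle.values.Array;
        import net.openhft.chronicle.values.Values;

        public class ArrayValueSketch {

            // A flyweight value interface with a fixed-length double field,
            // read and written element by element through indexed accessors.
            public interface DoubleVector {
                @Array(length = 5)
                void setValueAt(int index, double value);

                double getValueAt(int index);
            }

            public static void main(String[] args) {
                try (ChronicleMap<Integer, DoubleVector> map = ChronicleMap
                        .of(Integer.class, DoubleVector.class)
                        .entries(100)                          // illustrative capacity
                        .create()) {

                    DoubleVector vector = Values.newHeapInstance(DoubleVector.class);
                    for (int i = 0; i < 5; i++)
                        vector.setValueAt(i, i + 1.0);
                    map.put(1, vector);

                    // Reuse one instance for reads to avoid allocating per lookup.
                    DoubleVector using = map.newValueInstance();
                    map.getUsing(1, using);
                    System.out.println(using.getValueAt(2));   // 3.0
                }
            }
        }
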
  • Illegal reflective access operation

    With all versions later than 3.17.8 on Java 11 this message occurs:

    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access using Lookup on net.openhft.chronicle.core.Jvm (file:/Users/TORS/.m2/repository/net/openhft/chronicle-core/2.19.32/chronicle-core-2.19.32.jar) to class java.lang.reflect.AccessibleObject
    WARNING: Please consider reporting this to the maintainers of net.openhft.chronicle.core.Jvm
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release

    bug 
    opened by wapgui 13
  • InterProcessDeadLockException

    I am seeing the following exception from time to time. Not sure what causes this. It seems like the .dat file is corrupted, as the problem could be fixed after I regenerated the file.

    Caused by: net.openhft.chronicle.hash.locks.InterProcessDeadLockException: ChronicleMap{name=null, file=E:\pva_binary_data_TODAY\secIdSymbol.dat, identityHashCode=1995022532}:
    Contexts locked on this segment:
    net.openhft.chronicle.map.impl.CompiledMapIterationContext@38391dde: used, segment 27, local state: UNLOCKED, read lock count: 0, update lock count: 0, write lock count: 0
    Current thread contexts:
    net.openhft.chronicle.map.impl.CompiledMapQueryContext@3924d577: unused
    net.openhft.chronicle.map.impl.CompiledMapIterationContext@38391dde: used, segment 27, local state: UNLOCKED, read lock count: 0, update lock count: 0, write lock count: 0

    at net.openhft.chronicle.map.impl.CompiledMapIterationContext.debugContextsAndLocks(CompiledMapIterationContext.java:1798)
    at net.openhft.chronicle.map.impl.CompiledMapIterationContext.debugContextsAndLocksGuarded(CompiledMapIterationContext.java:116)
    at net.openhft.chronicle.map.impl.CompiledMapIterationContext$UpdateLock.lock(CompiledMapIterationContext.java:809)
    at net.openhft.chronicle.map.impl.CompiledMapIterationContext.forEachSegmentEntryWhile(CompiledMapIterationContext.java:3942)
    at net.openhft.chronicle.map.impl.CompiledMapIterationContext.forEachSegmentEntry(CompiledMapIterationContext.java:3948)
    at net.openhft.chronicle.map.ChronicleMapIterator.fillEntryBuffer(ChronicleMapIterator.java:61)
    at net.openhft.chronicle.map.ChronicleMapIterator.hasNext(ChronicleMapIterator.java:77)
    at java.util.Iterator.forEachRemaining(Iterator.java:115)
    at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
    at com.pva.common.util.UUIDUtil.reverseGenericeMap(UUIDUtil.java:92)
    at com.pva.common.util.UUIDUtil.reverseMap(UUIDUtil.java:100)
    at com.pva.algotrading.analysis.service.api.impl.SymbolLookupImpl.loadRevMap(SymbolLookupImpl.java:57)
    ... 29 more
    

    Caused by: net.openhft.chronicle.hash.locks.InterProcessDeadLockException: Failed to acquire the lock in 60 seconds. Possible reasons:

    • The lock was not released by the previous holder. This can happen if you use the contexts API, for example map.queryContext(key), and the context is not closed, for instance because it is not used in a try-with-resources block.

    • This Chronicle Map (or Set) instance is persisted to disk, and the previous process (or one of parallel accessing processes) has crashed while holding this lock. In this case you should use ChronicleMapBuilder.recoverPersistedTo() procedure to access the Chronicle Map instance.

    • A concurrent thread or process, currently holding this lock, spends unexpectedly long time (more than 60 seconds) in the context (try-with-resource block) or one of overridden interceptor methods (or MapMethods, or MapEntryOperations, or MapRemoteOperations) while performing an ordinary Map operation or replication. You should either redesign your logic to spend less time in critical sections (recommended) or acquire this lock with tryLock(time, timeUnit) method call, with sufficient time specified.

    • Segment(s) in your Chronicle Map are very large, and iteration over them takes more than 60 seconds. In this case you should acquire this lock with tryLock(time, timeUnit) method call, with longer timeout specified.

    • This is a dead lock. If you perform multi-key queries, ensure you acquire segment locks in the order (ascending by segmentIndex()), you can find an example here: https://github.com/OpenHFT/Chronicle-Map#multi-key-queries

      at net.openhft.chronicle.hash.impl.BigSegmentHeader.deadLock(BigSegmentHeader.java:71)
      at net.openhft.chronicle.hash.impl.BigSegmentHeader.updateLock(BigSegmentHeader.java:442)
      at net.openhft.chronicle.map.impl.CompiledMapIterationContext$UpdateLock.lock(CompiledMapIterationContext.java:807)
      ... 43 more

    opened by Hanalababy 13
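
    A minimal sketch of the context pattern referenced in the exception message above: the query context is released through try-with-resources and the lock is taken with a bounded tryLock(). The key/value types, timeout and helper name are illustrative:

        import net.openhft.chronicle.hash.locks.InterProcessLock;
        import net.openhft.chronicle.map.ChronicleMap;
        import net.openhft.chronicle.map.ExternalMapQueryContext;
        import net.openhft.chronicle.map.MapEntry;

        import java.util.concurrent.TimeUnit;

        public class QueryContextSketch {

            // Read one value under an explicit segment lock, bounding the wait so a
            // stuck lock surfaces as an error instead of a 60-second stall.
            static String readWithTimeout(ChronicleMap<String, String> map, String key)
                    throws InterruptedException {
                try (ExternalMapQueryContext<String, String, ?> cxt = map.queryContext(key)) {
                    InterProcessLock readLock = cxt.readLock();
                    if (!readLock.tryLock(10, TimeUnit.SECONDS))       // illustrative timeout
                        throw new IllegalStateException("Could not lock segment for key " + key);
                    try {
                        MapEntry<String, String> entry = cxt.entry();
                        return entry != null ? entry.value().get() : null;
                    } finally {
                        readLock.unlock();
                    }
                }
            }
        }
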
  • Bump versions-maven-plugin from 2.13.0 to 2.14.2

    Bumps versions-maven-plugin from 2.13.0 to 2.14.2.

    Release notes

    Sourced from versions-maven-plugin's releases.

    2.14.2

    Changes

    🚀 New features and improvements

    🐛 Bug Fixes

    📦 Dependency updates

    👻 Maintenance

    2.14.1

    Changes

    🐛 Bug Fixes

    2.14.0

    Changes

    🚀 New features and improvements

    ... (truncated)

    Commits
    • 374ddab [maven-release-plugin] prepare release 2.14.2
    • 2b9bdb7 Bump wagon-provider-api from 3.5.2 to 3.5.3
    • 67c1800 Resolves #872: Make allowSnapshots an explicit argument in lookupDependencyUp...
    • 2fe2c3d Manage transitive dependencies version for security updates
    • 1130350 Upgrade com.fasterxml.woodstox:woodstox-core to 6.4.0
    • 8f2fd07 Project dependencies maintenance - move versions to dependencyManagement
    • 2bed457 Add a simple cache for ComparableVersions
    • 2d7a157 Bump actions/stale from 6 to 7
    • 4546a4e Fixes #866: Require maven 3.2.5
    • 5ee419d Bump mockito-inline from 4.9.0 to 4.10.0
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
  • Bump xstream from 1.4.19 to 1.4.20

    Bumps xstream from 1.4.19 to 1.4.20.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    dependencies 
    opened by dependabot[bot] 0
  • size of ObjectOutputStream is not equal when two Maps are equal

    In my use case my Map will contain a Map for both keys and values.

    I have run into a weird case where I found two maps that are equal, but the sizes of their ObjectOutputStream serializations (which ChronicleMap uses under the covers) are not.

    When using a java.util.HashMap my use case works as expected. When I substitute in ChronicleMap, my use case no longer works.

    Here is a link to the code that shows the issue: https://github.com/mores/maven-examples/blob/ac6012ca1236ad460f3b3767037531fdb2e3dffd/offHeapMap/src/test/java/org/test/MapTest.java#L76

    I am able to reproduce this bug on Ubuntu using both JDK 1.8.0_161 and 11.0.17.

    opened by mores 2
  • Bump third-party-bom from 3.23.0 to 3.23.1

    Bumps third-party-bom from 3.23.0 to 3.23.1.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    dependencies 
    opened by dependabot[bot] 0
  • NoUpperBoundChunksPerEntryTest.noUpperBoundChunksPerEntryTest is flaky

    java.lang.IllegalArgumentException: _firstFreeTierIndex should be in [0, 1099511627775] range, 1696615168607488 is given
      at net.openhft.chronicle.hash.VanillaGlobalMutableState$$Native.setFirstFreeTierIndex(VanillaGlobalMutableState$$Native.java:40)
      at net.openhft.chronicle.hash.impl.VanillaChronicleHash.allocateTier(VanillaChronicleHash.java:903)
      at net.openhft.chronicle.map.impl.CompiledMapQueryContext.nextTier(CompiledMapQueryContext.java:3115)
      at net.openhft.chronicle.map.impl.CompiledMapQueryContext.alloc(CompiledMapQueryContext.java:3476)
      at net.openhft.chronicle.map.impl.CompiledMapQueryContext.initEntryAndKey(CompiledMapQueryContext.java:3494)
      at net.openhft.chronicle.map.impl.CompiledMapQueryContext.putEntry(CompiledMapQueryContext.java:3987)
      at net.openhft.chronicle.map.impl.CompiledMapQueryContext.doInsert(CompiledMapQueryContext.java:4176)
      at net.openhft.chronicle.map.MapEntryOperations.insert(MapEntryOperations.java:153)
      at net.openhft.chronicle.map.impl.CompiledMapQueryContext.insert(CompiledMapQueryContext.java:4099)
      at net.openhft.chronicle.map.MapMethods.put(MapMethods.java:89)
      at net.openhft.chronicle.map.VanillaChronicleMap.put(VanillaChronicleMap.java:901)
      at net.openhft.chronicle.map.NoUpperBoundChunksPerEntryTest.noUpperBoundChunksPerEntryTest(NoUpperBoundChunksPerEntryTest.java:33)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    
    flaky 
    opened by alamar 0
Releases: chronicle-map-3.23.5
Owner: Chronicle Software : Open Source (Open Source components of Chronicle Software)