Lightning Memory-Mapped Database (LMDB) for Java: a low-latency, transactional, sorted, embedded, key-value store

Overview

LMDB offers:

  • Transactions (full ACID semantics)
  • Ordered keys (enabling very fast cursor-based iteration)
  • Memory-mapped files (enabling optimal OS-level memory management)
  • Zero copy design (no serialization or memory copy overhead)
  • No blocking between readers and writers
  • Configuration-free (no need to "tune" it to your storage)
  • Instant crash recovery (no logs, journals or other complexity)
  • Minimal file handle consumption (just one data file; not 100,000's like some stores)
  • Same-thread operation (LMDB is invoked within your application thread; no compactor thread is needed)
  • Freedom from application-side data caching (memory-mapped files are more efficient)
  • Multi-threading support (each thread can have its own MVCC-isolated transaction)
  • Multi-process support (on the same host with a local file system)
  • Atomic hot backups

LmdbJava adds Java-specific features to LMDB:

  • Extremely fast across a broad range of benchmarks, data sizes and access patterns
  • Modern, idiomatic Java API (including iterators, key ranges, enums, exceptions, etc.)
  • Nothing to install (the JAR embeds the latest LMDB libraries for Linux, macOS and Windows)
  • Buffer agnostic (Java ByteBuffer, Agrona DirectBuffer, Netty ByteBuf, or your own buffer)
  • 100% stock-standard, officially-released, widely-tested LMDB C code (no extra C/JNI code)
  • Low latency design (allocation-free; buffer pools; optional checks can easily be disabled in production, etc.)
  • Mature code (commenced in 2016) used for heavy production workloads (e.g. > 500 TB of HFT data)
  • Actively maintained, with a "Zero Bug Policy" before every release (see issues)
  • Available from Maven Central and OSS Sonatype Snapshots
  • Continuous integration testing on Linux, Windows and macOS with Java 8, 11 and 14
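
The API in action, as a minimal sketch (the path /tmp/lmdb-demo, the database name "demo" and the key/value strings are illustrative, not part of the project):

```java
import static java.nio.charset.StandardCharsets.UTF_8;
import static org.lmdbjava.DbiFlags.MDB_CREATE;

import java.io.File;
import java.nio.ByteBuffer;
import org.lmdbjava.Dbi;
import org.lmdbjava.Env;
import org.lmdbjava.Txn;

public class HelloLmdb {
  public static void main(String[] args) {
    final File path = new File("/tmp/lmdb-demo");
    path.mkdirs();
    // Env is AutoCloseable; the builder sets the map size and DB count up front.
    try (Env<ByteBuffer> env = Env.create()
        .setMapSize(10_485_760)  // 10 MiB memory map
        .setMaxDbs(1)
        .open(path)) {
      final Dbi<ByteBuffer> db = env.openDbi("demo", MDB_CREATE);

      // Keys and values must be direct buffers for the default ByteBuffer proxy.
      final ByteBuffer key = ByteBuffer.allocateDirect(env.getMaxKeySize());
      final ByteBuffer val = ByteBuffer.allocateDirect(64);
      key.put("greeting".getBytes(UTF_8)).flip();
      val.put("Hello, LMDB".getBytes(UTF_8)).flip();

      db.put(key, val);  // convenience form: opens and commits a write txn

      try (Txn<ByteBuffer> txn = env.txnRead()) {
        final ByteBuffer found = db.get(txn, key);
        System.out.println(UTF_8.decode(found).toString());
      }
    }
  }
}
```

The read returns a zero-copy view into the memory map, which is why it is only valid while the transaction remains open.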

Performance

Full details are in the latest benchmark report.

Documentation

Support

We're happy to help you use LmdbJava. Simply open a GitHub issue if you have any questions.

Contributing

Contributions are welcome! Please see the Contributing Guidelines.

License

This project is licensed under the Apache License, Version 2.0.

This project distribution JAR includes LMDB, which is licensed under The OpenLDAP Public License.

Comments
  • add Dbi<byte[]> support

In some contexts (such as migrating from legacy lmdbjni/leveldbjni APIs) it would be nice to have a Dbi<byte[]> instead of wrapping arrays with ByteBuffers and copying out the data in the caller. At first glance the design seems to indicate this should be accomplished with a BufferProxy implementation, but it's not clear how; e.g. at the point of the allocate() call the size is unknown, etc.

    opened by phraktle 48
  • What does Environment maxreaders reached (-30790) actually mean?

I successfully populated the Dbi with the data I am now trying to retrieve. But when I try to get it, I receive this error, which does not give me enough information to understand what the problem is, as the term "reader" is not mentioned even once in the TestTutorial.java code.

    Caused by: org.lmdbjava.Env$ReadersFullException: Environment maxreaders reached (-30790)
    at org.lmdbjava.ResultCodeMapper.checkRc(ResultCodeMapper.java:98)
    at org.lmdbjava.Txn.<init>(Txn.java:67)
    at org.lmdbjava.Env.txn(Env.java:361)
    at org.lmdbjava.Env.txnRead(Env.java:370)
    at uk.co.example.LmDbStore.getLatestDemoQCBBlock(LmDbStore.java:564)
    

    Here is the env setup code:

    Env<ByteBuffer> demoenv = create()
            .setMapSize(20971520L)
            .setMaxDbs(5)
            .open(new File(csEnvironmentStr + "/csdbdemo"));
    demoQcbStore = demoenv.openDbi("demoQcbStoreLocn", MDB_CREATE);
    

    Here is how I store the qcbblock:

    final ByteBuffer key = allocateDirect(40);
    ByteBuffer val = null;
    try {
        val = toByteArray(qcb);
    } catch (IOException e) {
        logger.error("", e);
    }

    try (Txn<ByteBuffer> txn = demoenv.txnWrite()) {
        key.put(id.getBytes(UTF_8)).flip();
        demoQcbStore.put(txn, key, val);
        // An explicit commit is required, otherwise Txn.close() rolls it back.
        txn.commit();
    }
    ...
    // toByteArray and toObject are taken from: http://tinyurl.com/69h8l7x with mods for ByteBuffer
    public static ByteBuffer toByteArray(Object obj) throws IOException {
        byte[] bytes = null;
        ByteArrayOutputStream bos = null;
        ObjectOutputStream oos = null;
        try {
            bos = new ByteArrayOutputStream();
            oos = new ObjectOutputStream(bos);
            oos.writeObject(obj);
            oos.flush();
            bytes = bos.toByteArray();
        } finally {
            if (oos != null) {
                oos.close();
            }
            if (bos != null) {
                bos.close();
            }
        }
        final ByteBuffer value = allocateDirect(bytes.length);
        value.put(bytes).flip();
        return  value;
    }
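
As an aside, the serialization helper above can be written more compactly with try-with-resources, which closes both streams even on failure (the class and method names below are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.nio.ByteBuffer;

public class SerializeDemo {
  // Serialize any Serializable object into a direct ByteBuffer,
  // ready for use as an LMDB value.
  static ByteBuffer toByteBuffer(Serializable obj) throws IOException {
    final byte[] bytes;
    try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
         ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(obj);
      oos.flush();
      bytes = bos.toByteArray();
    }
    final ByteBuffer value = ByteBuffer.allocateDirect(bytes.length);
    value.put(bytes).flip();
    return value;
  }

  public static void main(String[] args) throws IOException {
    ByteBuffer buf = toByteBuffer("hello");
    System.out.println(buf.remaining() > 0);  // prints true
  }
}
```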
    

    Here is the getLatestDemoQCBBlock code, last line is where the exception is thrown:

     public QCBBlock getLatestDemoQCBBlock(String id) {
        final ByteBuffer key = allocateDirect(40);
        key.put(id.getBytes(UTF_8)).flip();
        QCBBlock block = null;
        try (Txn<ByteBuffer> txn = demoenv.txnRead()) {
    

    Any explanation?
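
For context on the error itself: each concurrent read transaction occupies a slot in LMDB's reader lock table (126 by default), and ReadersFullException means those slots are exhausted, most commonly because read transactions are opened but never closed. A hedged sketch of both remedies (the path is illustrative):

```java
import java.io.File;
import java.nio.ByteBuffer;
import org.lmdbjava.Env;
import org.lmdbjava.Txn;

public class MaxReadersDemo {
  public static void main(String[] args) {
    File path = new File("/tmp/lmdb-readers-demo");
    path.mkdirs();
    // Remedy 1: raise the slot count if many genuinely concurrent readers are needed.
    try (Env<ByteBuffer> env = Env.create()
        .setMapSize(10_485_760)
        .setMaxReaders(512)
        .open(path)) {
      // Remedy 2 (the usual fix): always close read txns, e.g. with
      // try-with-resources; a leaked txn pins its reader slot forever.
      try (Txn<ByteBuffer> txn = env.txnRead()) {
        System.out.println("read txn open: " + (txn.getId() >= 0));
      }
    }
  }
}
```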

    opened by davidwynter 28
  • Exception in thread "main" java.lang.IllegalArgumentException: Unknown result code 131

    Code is based on snippet from another issue

    import java.io.File;
    import java.nio.ByteBuffer;
    import java.util.concurrent.ThreadLocalRandom;
    
    import org.lmdbjava.Dbi;
    import org.lmdbjava.DbiFlags;
    import org.lmdbjava.Env;
    
    public class Test {
    
        public static void main(String[] args) {
            File path = new File("lmdbTest");
            path.mkdir();
            for (; ; ) {
                writeFull(path);
            }
        }
    
        private static void writeFull(File path) {
            int size = 1;
            int MB = 1024 * 1024 ;//* 1024;
            Env<ByteBuffer> env = Env.create().setMapSize(size * MB).setMaxDbs(1).open(path);
            Dbi<ByteBuffer> db = env.openDbi("test", DbiFlags.MDB_CREATE);
    
            byte[] k = new byte[64];
            ByteBuffer key = ByteBuffer.allocateDirect(64);
            ByteBuffer val = ByteBuffer.allocateDirect(MB/4);
    
            ThreadLocalRandom rnd = ThreadLocalRandom.current();
            int count = 0;
            for (int i = 0; i < 1024*100; i++) {
                try {
                    rnd.nextBytes(k);
                    key.clear();
                    key.put(k).flip();
                    val.clear();
                    db.put(key, val);
                    System.out.println("written " + ++count);
                } catch (Exception e) {
                    //e.printStackTrace(System.out);
                    System.out.println("map full, old size = "+size+" MB");
                    db.close();
                    env.close();
                    size++;
                    env = Env.create().setMapSize(size * MB).setMaxDbs(1).open(path);
                    db = env.openDbi("test", DbiFlags.MDB_CREATE);
                }
            }
    
            System.out.println("closing db");
            db.close();
    
            System.out.println("closing env");
            env.close();
        }
    
    }

    Exception itself:

    Exception in thread "main" java.lang.IllegalArgumentException: Unknown result code 131
        at org.lmdbjava.ResultCodeMapper.checkRc(ResultCodeMapper.java:96)
        at org.lmdbjava.Env$Builder.open(Env.java:376)
        at org.lmdbjava.Env$Builder.open(Env.java:388)
        at rhinodog.Run.Test.writeFull(Test.java:48)
        at rhinodog.Run.Test.main(Test.java:18)
    
    opened by kk00ss 26
  • Exception when using parent transactions

    I can't get parent transactions to work. This is the Scala code I run:

    object ParentTxnProblem extends App {
      val db =  org.lmdbjava.Env.create
      val testFolder = new java.io.File("./test")
      testFolder.mkdir
      val env = db.open(testFolder)
      val parentTxn = env.txnRead
      val childTxn = env.txn(parentTxn, org.lmdbjava.TxnFlags.MDB_RDONLY_TXN)
    }
    

    This is the output of running the above with the current 0.0.5-SNAPSHOT from Sonatype:

    Exception in thread "main" org.lmdbjava.LmdbNativeException$ConstantDerviedException: Platform constant error code: EINVAL (22)
    	at org.lmdbjava.ResultCodeMapper.checkRc(ResultCodeMapper.java:113)
    	at org.lmdbjava.Txn.<init>(Txn.java:73)
    	at org.lmdbjava.Env.txn(Env.java:274)
    	at last line of example code above
    

    Am I doing something wrong or is there an issue with parent transactions? Thanks a lot for your time.

    opened by pstutz 24
  • BadReaderLockException on concurrent read transactions

    Opening two concurrent read txns gives an error. Here's a unit test to reproduce (for TxnTest.java):

      @Test
      public void readOnlyConcurrentTxnAllowedInReadOnlyEnv() {
        env.openDbi(DB_1, MDB_CREATE);
        final Env<ByteBuffer> roEnv = create().open(path, MDB_NOSUBDIR,
                MDB_RDONLY_ENV);
        Txn<ByteBuffer> txn1 = roEnv.txnRead();
        Txn<ByteBuffer> txn2 = roEnv.txnRead();
        assertThat(txn1, is(notNullValue()));
        assertThat(txn2, is(notNullValue()));
        assertThat(txn1, is(not(sameInstance(txn2))));
        assertThat(txn1.getId(), is(not(txn2.getId())));
        txn1.close();
        txn2.close();
      }
    

    Stacktrace:

    org.lmdbjava.Txn$BadReaderLockException: Invalid reuse of reader locktable slot (-30783)
    
        at org.lmdbjava.ResultCodeMapper.<clinit>(ResultCodeMapper.java:54)
        at org.lmdbjava.Env$Builder.open(Env.java:369)
        at org.lmdbjava.TxnTest.before(TxnTest.java:81)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
        at org.junit.rules.RunRules.evaluate(RunRules.java:20)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
        at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
        at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117)
        at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:42)
        at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:262)
        at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:84)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
    
    
    Process finished with exit code 255
    
    opened by kamstrup 18
  • Change back to getLong: why?

    Hi to all,

    This is more a clarification request than an issue per se:

    • we're currently evaluating lmdbjava and, while we didn't experience any problems on OS X and Linux, we had some random and rather frequent crashes on Windows (access violations while retrieving buffers or keys). After investigation, we couldn't really pinpoint anything wrong with our code or discern any general rule as to the crashes' cause (multithreading, memory, etc.). We're using JDK 8, but using other JDKs did not help.

    • we finally managed to reproduce the problem consistently using ByteBufferProxy.PROXY_SAFE, creating a large env size (> 1 GB) and a bit of our code in a JUnit test set to a 2048m max heap size (using the default proxy, PROXY_OPTIMAL, makes it much harder to crash, but it crashes eventually; the same with Agrona buffers or even byte arrays). Our piece of code does not really stress LMDB, nor fill the map or anything extreme.

    • then I saw issue 97 and your change from getLong to getAddress in the buffer proxies, and decided to give it another go. It improved things a lot. Early tests seem to indicate that the problem went away, at least with ByteBufferProxy.PROXY_SAFE.

    Which leads to my question:

    • since we originally had the problem with OPTIMAL proxy versions and you reverted back to using getLong for these, the problem is still there unless we write our own proxy. Which we can do, of course, but I wanted to know if there was a specific issue that made you revert your changes?

    Thanks!

    question 
    opened by altesse 15
  • Platform constant error code: ESRCH No such process (3) under JDK10

I suppose it's worth mentioning. All attempts to run lmdbjava under JDK 10 produce:

    org.lmdbjava.LmdbNativeException$ConstantDerviedException: Platform constant error code: ESRCH No such process (3)
    	at org.lmdbjava.ResultCodeMapper.checkRc(ResultCodeMapper.java:114)
    	at org.lmdbjava.Env$Builder.open(Env.java:458)
    	at org.lmdbjava.Env$Builder.open(Env.java:474)
    

    Not exactly sure what's causing this. It's pretty strange and seems to be unique to Java 10.

    opened by buko 14
  • improve CursorIterator

    Consider making CursorIterator more extensible. It would be reasonable to be able to subclass to provide a range iterator (i.e. a forward iterator that checks an upper bound key, or a reverse iterator checking a lower bound). Since the class is final and tryToComputeNext is private, this is not currently feasible.

    Another minor point is that the state machine should probably include a CLOSED state (in which hasNext returns false, and repeated calls to close are idempotent).
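
For readers hitting this today: later LmdbJava releases added KeyRange and CursorIterable, which cover the bounded-iteration case described above without subclassing (verify availability against your version; the path and database name below are illustrative):

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import java.io.File;
import java.nio.ByteBuffer;
import org.lmdbjava.CursorIterable;
import org.lmdbjava.CursorIterable.KeyVal;
import org.lmdbjava.Dbi;
import org.lmdbjava.DbiFlags;
import org.lmdbjava.Env;
import org.lmdbjava.KeyRange;
import org.lmdbjava.Txn;

public class RangeScanDemo {
  static ByteBuffer bb(String s) {
    ByteBuffer b = ByteBuffer.allocateDirect(8);
    b.put(s.getBytes(UTF_8)).flip();
    return b;
  }

  public static void main(String[] args) {
    File path = new File("/tmp/lmdb-range-demo");
    path.mkdirs();
    try (Env<ByteBuffer> env =
        Env.create().setMapSize(10_485_760).setMaxDbs(1).open(path)) {
      Dbi<ByteBuffer> db = env.openDbi("range", DbiFlags.MDB_CREATE);
      for (String k : new String[] {"a", "b", "c", "d"}) {
        db.put(bb(k), bb(k));
      }
      // Forward iteration bounded on both ends; no subclassing needed.
      try (Txn<ByteBuffer> txn = env.txnRead();
           CursorIterable<ByteBuffer> it =
               db.iterate(txn, KeyRange.closed(bb("b"), bb("c")))) {
        for (KeyVal<ByteBuffer> kv : it) {
          System.out.println(UTF_8.decode(kv.key()).toString());
        }
      }
    }
  }
}
```

KeyRange also provides open, reverse and unbounded variants, addressing the forward and reverse bounded iterators the issue asks for.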

    opened by phraktle 14
  • NoSuchMethodError on JDK8 with Clojure

    I'm using lmdbjava for a Clojure project. I don't think Clojure matters for this issue but it's the latest version 1.10. My project was working fine with lmdbjava 0.6.1, but failed when I tried to update to lmdbjava 0.6.2.

    (This is a small bit of the Clojure test failure report.)

    ERROR in (larger-test) (ByteBufferProxy.java:297)
    Uncaught exception, not in assertion.
    expected: nil
    actual: java.lang.NoSuchMethodError: java.nio.ByteBuffer.clear()Ljava/nio/ByteBuffer;
        at org.lmdbjava.ByteBufferProxy$UnsafeProxy.out (ByteBufferProxy.java:297)
        org.lmdbjava.ByteBufferProxy$UnsafeProxy.out (ByteBufferProxy.java:260)
        org.lmdbjava.KeyVal.valOut (KeyVal.java:133)
        org.lmdbjava.Dbi.reserve (Dbi.java:465)

    I'm using java version "1.8.0_192" Java(TM) SE Runtime Environment (build 1.8.0_192-b12) Java HotSpot(TM) 64-Bit Server VM (build 25.192-b12, mixed mode)

Interestingly, my Clojure project works on JDK 11 with either lmdbjava 0.6.1 or 0.6.2. I got it to work with JDK 8 by building lmdbjava 0.6.2 myself and installing it in my local Maven repository. It looks like the Maven Central jar was built with JDK 11, so maybe that explains my problem? I'm guessing that some mvn option to target 1.8 would help, but this is just speculation. If other people can confirm that the Maven Central jar works with JDK 8, then I will have to investigate the Clojure interop problem.

    Would you be willing to target JDK 8 or is that a non-starter?

    opened by miner 13
  • Incremental DB file growth for Windows

    I'm currently using lmdbjava 0.6.0 which uses LMDB 0.9.19 (retrieved via Meta.version()).

    When using Linux setting a large map size is no problem: The database file grows with the size of the database. This is convenient as I can just use a very large value and I don't have to care about database resizing.

    When using Windows (I'm using Windows 7) the database file size is always the same as the mapping size. E.g. when I set a map size of 100GB then the file size will be 100GB, too.

    The LMDB inventor says this behavior has been changed (ITS#8324). The corresponding commit seems to be

    fb5a768 Mon Nov 30 19:46:19 2015 ITS#8324 incremental DB file growth for Windows

which is from Nov 30, 2015, and should be included in LMDB 0.9.19.

    However, the file size behavior under Windows still persists with lmdbjava 0.6.0. So I'm wondering:

    • Did I misunderstand what ITS#8324 is supposed to do?
    • Did lmdbjava bundle the wrong LMDB version?
    • Or does the issue still exist in LMDB?

    Any ideas?

    opened by jubax 13
  • Enumerating all dbs in an environment

    I need to open an environment and discover all of the databases. From what I understand, the database names are stored in the default, nameless db. But what encoding? Creating a db ends in a native method stub passing a String... UTF-8?

        Txn<ByteBuffer> txn = env.txn(null);
        env.openDbi(null).iterate(txn).forEachRemaining(kv -> {
          ByteBuffer keybytes = kv.key();
          String key = ??? 
        });
    

    It would probably be useful to add both the ability to retrieve all dbi names to Env, as well as the ability to return all Dbis in an Env as a Map<String, Dbi>. The LMDB docs suggest that it is good practice to retrieve and re-use these anyway.

    So I suppose this is mostly a question, but also a minor feature request.
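
For readers with the same question: LmdbJava encodes database names as UTF-8 when a Dbi is opened, and current releases expose Env#getDbiNames(), which returns the raw name bytes from the unnamed DB (verify against your version; the path and names below are illustrative):

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import java.io.File;
import java.nio.ByteBuffer;
import java.util.List;
import org.lmdbjava.DbiFlags;
import org.lmdbjava.Env;

public class ListDbisDemo {
  public static void main(String[] args) {
    File path = new File("/tmp/lmdb-names-demo");
    path.mkdirs();
    try (Env<ByteBuffer> env =
        Env.create().setMapSize(10_485_760).setMaxDbs(2).open(path)) {
      env.openDbi("alpha", DbiFlags.MDB_CREATE);
      env.openDbi("beta", DbiFlags.MDB_CREATE);

      // getDbiNames() scans the unnamed DB and returns each name's bytes;
      // decode them with UTF-8 to recover the Strings used at openDbi() time.
      List<byte[]> names = env.getDbiNames();
      for (byte[] name : names) {
        System.out.println(new String(name, UTF_8));
      }
    }
  }
}
```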

    opened by scottcarey 13
  • Unable to make field long java.nio.Buffer.address accessible

Hi! I am trying to use this lib from Clojure on Java 16.0.1 and got this:

    Unable to make field long java.nio.Buffer.address accessible: module java.base does not "opens java.nio" to unnamed module @3b2bc5e8

    please help!
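
For context: on JDK 16+ the JVM no longer opens java.nio to reflection by default, which breaks LmdbJava's buffer proxies. The usual remedy is passing --add-opens flags when launching (a sketch; yourapp.jar is a placeholder, and Clojure launchers accept the same JVM options via their own mechanisms):

```shell
# Open the JDK internals that LmdbJava's buffer proxies reflect into:
java \
  --add-opens java.base/java.nio=ALL-UNNAMED \
  --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
  -jar yourapp.jar
```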

    opened by faust45 2
  • Add CodeQL workflow for GitHub code scanning

    Hi lmdbjava/lmdbjava!

    This is a one-off automatically generated pull request from LGTM.com :robot:. You might have heard that we’ve integrated LGTM’s underlying CodeQL analysis engine natively into GitHub. The result is GitHub code scanning!

    With LGTM fully integrated into code scanning, we are focused on improving CodeQL within the native GitHub code scanning experience. In order to take advantage of current and future improvements to our analysis capabilities, we suggest you enable code scanning on your repository. Please take a look at our blog post for more information.

    This pull request enables code scanning by adding an auto-generated codeql.yml workflow file for GitHub Actions to your repository — take a look! We tested it before opening this pull request, so all should be working :heavy_check_mark:. In fact, you might already have seen some alerts appear on this pull request!

    Where needed and if possible, we’ve adjusted the configuration to the needs of your particular repository. But of course, you should feel free to tweak it further! Check this page for detailed documentation.

    Questions? Check out the FAQ below!

    FAQ

    Click here to expand the FAQ section

    How often will the code scanning analysis run?

    By default, code scanning will trigger a scan with the CodeQL engine on the following events:

    • On every pull request — to flag up potential security problems for you to investigate before merging a PR.
    • On every push to your default branch and other protected branches — this keeps the analysis results on your repository’s Security tab up to date.
    • Once a week at a fixed time — to make sure you benefit from the latest updated security analysis even when no code was committed or PRs were opened.

    What will this cost?

    Nothing! The CodeQL engine will run inside GitHub Actions, making use of your unlimited free compute minutes for public repositories.

    What types of problems does CodeQL find?

    The CodeQL engine that powers GitHub code scanning is the exact same engine that powers LGTM.com. The exact set of rules has been tweaked slightly, but you should see almost exactly the same types of alerts as you were used to on LGTM.com: we’ve enabled the security-and-quality query suite for you.

    How do I upgrade my CodeQL engine?

    No need! New versions of the CodeQL analysis are constantly deployed on GitHub.com; your repository will automatically benefit from the most recently released version.

    The analysis doesn’t seem to be working

    If you get an error in GitHub Actions that indicates that CodeQL wasn’t able to analyze your code, please follow the instructions here to debug the analysis.

    How do I disable LGTM.com?

    If you have LGTM’s automatic pull request analysis enabled, then you can follow these steps to disable the LGTM pull request analysis. You don’t actually need to remove your repository from LGTM.com; it will automatically be removed in the next few months as part of the deprecation of LGTM.com (more info here).

    Which source code hosting platforms does code scanning support?

    GitHub code scanning is deeply integrated within GitHub itself. If you’d like to scan source code that is hosted elsewhere, we suggest that you create a mirror of that code on GitHub.

    How do I know this PR is legitimate?

    This PR is filed by the official LGTM.com GitHub App, in line with the deprecation timeline that was announced on the official GitHub Blog. The proposed GitHub Action workflow uses the official open source GitHub CodeQL Action. If you have any other questions or concerns, please join the discussion here in the official GitHub community!

    I have another question / how do I get in touch?

    Please join the discussion here to ask further questions and send us suggestions!

    opened by lgtm-com[bot] 0
  • Question: How many configurations (parameters or knobs) are available for users to tune for performance?

Hi, every contributor! As the title says, I want to explore the performance potential of lmdbjava by tuning its options. Which part of the code should I look at? And where is the entry point to the whole of lmdbjava?
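
For orientation: the main knobs live on Env.Builder and in EnvFlags, mirroring the underlying LMDB C options. A hedged sketch of the principal ones (the values and path are illustrative, not recommendations):

```java
import java.io.File;
import java.nio.ByteBuffer;
import org.lmdbjava.Env;
import org.lmdbjava.EnvFlags;

public class TuningDemo {
  public static void main(String[] args) {
    File path = new File("/tmp/lmdb-tuning-demo");
    path.mkdirs();
    try (Env<ByteBuffer> env = Env.create()
        .setMapSize(1_073_741_824L)  // max DB size; growing it later needs a reopen
        .setMaxDbs(4)                // number of named databases allowed
        .setMaxReaders(126)          // concurrent read-transaction slots
        .open(path,
            EnvFlags.MDB_WRITEMAP,   // writeable mmap: faster, less crash-safe
            EnvFlags.MDB_NOSYNC)) {  // skip fsync on commit: durability trade-off
      System.out.println("env open: " + (env.getMaxKeySize() > 0));
    }
  }
}
```

Each flag trades durability guarantees for speed, so the LMDB documentation for the corresponding mdb_env flags is worth reading before enabling any of them in production.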

    opened by jimmy66688 0
  • DirectBufferProxy uses unnecessary concurrent queue for object pool

Unnecessary, since the queue object is thread-local. poll() and offer() are likely more expensive than a simple ArrayDeque, which is what the ByteBufferProxy object pool uses. Additionally, the concurrent queue is fixed-size, meaning the offer() in DirectBufferProxy.deallocate() can fail, abandoning the DirectBuffer to be garbage collected (though perhaps this is desired?), unlike the ArrayDeque in ByteBufferProxy, which resizes its capacity when offer() is called while full.

    opened by danielcranford 0