MapDB provides concurrent Maps, Sets and Queues backed by disk storage or off-heap memory. It is a fast and easy-to-use embedded Java database engine.

Overview

MapDB: database engine

MapDB combines an embedded database engine and Java collections. It is free under the Apache 2 license. MapDB is flexible and can be used in many roles:

  • Drop-in replacement for Maps, Lists, Queues and other collections.
  • Off-heap collections not affected by the Garbage Collector.
  • Multilevel cache with expiration and disk overflow (see the sketch below).
  • RDBMS replacement with transactions, MVCC, incremental backups etc.
  • Local data processing and filtering. MapDB has utilities to process huge quantities of data in a reasonable time.
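
A minimal sketch of the off-heap cache role, assuming the MapDB 3 API (memoryDirectDB, expireAfterCreate and expireMaxSize are maker methods; treat the exact calls and tuning values as illustrative):

    import org.mapdb.*;
    import java.util.concurrent.TimeUnit;

    // off-heap (direct memory) store, not scanned by the Garbage Collector
    DB db = DBMaker.memoryDirectDB().make();

    // expiring cache: entries are evicted 10 minutes after creation,
    // or once the map holds more than 1 million entries
    HTreeMap<String, String> cache = db
            .hashMap("cache", Serializer.STRING, Serializer.STRING)
            .expireAfterCreate(10, TimeUnit.MINUTES)
            .expireMaxSize(1_000_000)
            .create();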

Hello world

Maven snippet (replace VERSION with the latest version from Maven Central):

<dependency>
    <groupId>org.mapdb</groupId>
    <artifactId>mapdb</artifactId>
    <version>VERSION</version>
</dependency>

Hello world:

import org.mapdb.*;
import java.util.concurrent.ConcurrentMap;

DB db = DBMaker.memoryDB().make();
// explicit serializers avoid falling back to generic Java serialization
ConcurrentMap<String, String> map = db
        .hashMap("map", Serializer.STRING, Serializer.STRING)
        .make();
map.put("something", "here");
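
A file-backed variant is a small change (a sketch; fileDB with serializers appears in the issues below, and transactionEnable is assumed to be the MapDB 3 way to enable the crash-protecting write-ahead log):

    // file-backed store; transactionEnable() adds a write-ahead log
    DB fileDb = DBMaker.fileDB("file.db")
            .transactionEnable()
            .make();
    ConcurrentMap<String, String> persisted = fileDb
            .hashMap("map", Serializer.STRING, Serializer.STRING)
            .make();
    persisted.put("something", "here");
    fileDb.commit();  // make the change durable
    fileDb.close();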

You can continue with quick start or refer to the documentation.

Support

More details are available on the project site.

Development

MapDB is written in Kotlin, so you will need IntelliJ IDEA.

You can use Gradle to build MapDB.

MapDB is extensively unit-tested. By default, only a tiny fraction of all tests are executed, so the build finishes in under 10 minutes. The full test suite has over a million test cases and runs for several hours or days. To run the full test suite, set the -Dmdbtest=1 VM option.

Longer unit tests might require more memory. Use this property to increase the heap memory assigned to unit tests: -DtestArgLine="-Xmx3G"

By default, unit tests are executed in 3 threads. The thread count is controlled by the -DtestThreadCount=3 property.

On machines with limited memory you can change the fork mode so unit tests consume less RAM but run longer: -DtestReuseForks=false
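
Putting these together, a full test run on a memory-constrained machine might look like this (an illustrative Maven invocation; the properties above are passed as -D options, so adjust to however you drive the build):

    mvn verify -Dmdbtest=1 -DtestArgLine="-Xmx3G" -DtestThreadCount=3 -DtestReuseForks=false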

Comments
  • SIGBUS with ~StubRoutines::jlong_disjoint_arraycopy

    Using 1.0.6.

    After my fileDB passes 6 GB, I sometimes get the following JRE crash:

    A fatal error has been detected by the Java Runtime Environment:

    SIGBUS (0x7) at pc=0x00007fb3d904217a, pid=2935, tid=140392178644736

    JRE version: Java(TM) SE Runtime Environment (7.0_60-b19) (build 1.7.0_60-b19)
    Java VM: Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode linux-amd64 compressed oops)
    Problematic frame: v  ~StubRoutines::jlong_disjoint_arraycopy

    This happens when calling commit or compact. After a restart the store is recovered normally (all I have to do is remove the failed compact files).

    DB make code:

            DB dbState = DBMaker
                    .newFileDB(new File(location, "state"))
                    .asyncWriteEnable()
                    .cacheLRUEnable()
                    .cacheSize(100)
                    .mmapFileEnableIfSupported()
                    .closeOnJvmShutdown()
                    .make();
    

    I'm trying it now with mmap disabled, but compacting in this state is painfully slow. Will try with mmap partial (after the current compact run finishes :D )

    bug 
    opened by freakolowsky 30
  • Failed to deserialize SerializerPojo

    Here's a little background: I have a DB with 5M records (15 GB). Then 53k records are inserted. After the insertion I try to reopen the DB in read-only mode (on the same thread) and this exception is thrown. I can repeat this every time I do a massive insert. Tested on master (519f779a3ae6f0ae5f6c8a4feb4d26602f1b0456). Update: committing and compacting after the insert (before re-opening) works perfectly; however, this doesn't fix the situation.

    DBMaker dbmaker = DBMaker
            .newFileDB(path.toFile())
            .closeOnJvmShutdown()
            .transactionDisable()
            .cacheSize(32768)
            .mmapFileEnablePartial()
            .cacheLRUEnable()
            .fullChunkAllocationEnable();
    
    db.createTreeMap(MAIN_TREE_MAP)
                .counterEnable()
                .valuesOutsideNodesEnable()
                .makeOrGet();
    
    Exception in thread "main" java.lang.IndexOutOfBoundsException
        at java.nio.Buffer.checkIndex(Buffer.java:532)
        at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:139)
        at org.mapdb.DataInput2.readUnsignedByte(DataInput2.java:74)
        at org.mapdb.DataInput2.unpackInt(DataInput2.java:142)
        at org.mapdb.SerializerBase.deserializeString(SerializerBase.java:802)
        at org.mapdb.DataInput2.readUTF(DataInput2.java:131)
        at org.mapdb.SerializerPojo$1.deserialize(SerializerPojo.java:74)
        at org.mapdb.SerializerPojo$1.deserialize(SerializerPojo.java:39)
        at org.mapdb.Store.deserialize(Store.java:270)
        at org.mapdb.StoreDirect.get2(StoreDirect.java:456)
        at org.mapdb.StoreDirect.get(StoreDirect.java:409)
        at org.mapdb.Store.getSerializerPojo(Store.java:86)
        at org.mapdb.EngineWrapper.getSerializerPojo(EngineWrapper.java:123)
        at org.mapdb.EngineWrapper.getSerializerPojo(EngineWrapper.java:123)
        at org.mapdb.DB.<init>(DB.java:82)
        at org.mapdb.DBMaker.make(DBMaker.java:599)
    
    bug 
    opened by jrumbinas 30
  • Store without Index File

    Right now MapDB uses the following algorithm to get a record:

    1. take the recid, which is an offset into the Index File
    2. read an 8-byte long from the Index File to get the offset and size in the Physical File
    3. read the data from the Physical File
    4. deserialize and so on.

    The Index File is needed because the physical record location can change (delete, update). However, for read-only stores the Index File is not necessary and introduces an unnecessary seek operation (step 2).

    If the recid were an offset into the Physical File, the Index File could be eliminated. Such a store would only allow two operations: get and insert (append to the end of the file). This brings a challenge for building tree structures, since they need to be constructed from the bottom up rather than from the top down (no updates are allowed).

    But I believe we could provide utility methods for BTreeMap in the Data Pump to construct a btree inside such a store, as sketched below.
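
    A hypothetical sketch of such an index-free, append-only store (illustrative names, not MapDB API; get uses the recid as a direct file offset, so step 2 above disappears):

        import java.io.*;

        class AppendOnlyStore {
            private final RandomAccessFile phys;

            AppendOnlyStore(File file) throws IOException {
                phys = new RandomAccessFile(file, "rw");
            }

            // insert: append the record to the end of the Physical File
            // and return its offset as the recid
            long insert(byte[] data) throws IOException {
                long recid = phys.length();
                phys.seek(recid);
                phys.writeInt(data.length);
                phys.write(data);
                return recid;
            }

            // get: the recid is a direct offset, no Index File lookup
            byte[] get(long recid) throws IOException {
                phys.seek(recid);
                byte[] data = new byte[phys.readInt()];
                phys.readFully(data);
                return data;
            }
        }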

    2.0 
    opened by jankotek 27
  • ArraySerializer with non serializable fields fails

    new description

    MapDB 2.0 tries to store serializers as part of the collection definition if the Serializer implements Serializable. You are using ArraySerializer, which implements Serializable, but one of its fields is your own serializer, which is not serializable. And it fails, as illustrated below.
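
    A minimal illustration of this failure mode (hypothetical names; the point is that the outer serializer is Serializable while one of its fields is not):

        import org.mapdb.Serializer;
        import java.io.*;

        // NOT Serializable -- this field is what breaks the store
        class CustomEncoder {
            byte[] encode(String s) { return s.getBytes(); }
        }

        // A Serializable serializer holding a non-serializable field:
        // MapDB 2.0 persists it as part of the collection definition, and
        // Java serialization then throws NotSerializableException on the field.
        class EncodingSerializer implements Serializer<String>, Serializable {
            private final CustomEncoder encoder = new CustomEncoder();

            @Override
            public void serialize(DataOutput out, String value) throws IOException {
                byte[] b = encoder.encode(value);
                out.writeInt(b.length);
                out.write(b);
            }

            @Override
            public String deserialize(DataInput in, int available) throws IOException {
                byte[] b = new byte[in.readInt()];
                in.readFully(b);
                return new String(b);
            }

            @Override
            public int fixedSize() { return -1; }
        }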

    old description

    To increase speed / space savings with our text-heavy data I want to serialize it as ASCII. When trying out Serializer.STRING_ASCII I quickly discovered that it just calls the UTF string serializer, producing an identical result (with 2.0-beta2). So I decided to roll my own and improve the interface by using CharSequence objects instead of Strings, which can represent any number of string representations without the need to convert them / generate GC pressure. My code is below:

        public static final Serializer<CharSequence> ASCII_SERIALIZER = new Serializer<CharSequence>() {
            public void serialize(DataOutput out, CharSequence value) throws IOException {
                int SIZE = value.length();
                DataIO.packInt(out, SIZE);
                for (int i = 0; i < SIZE; i++) {
                    out.writeByte((byte)value.charAt(i));
                }
            }
            public CharSequence deserialize(DataInput in, int available) throws IOException {
                int SIZE = DataIO.unpackInt(in);
                StringBuilder result = new StringBuilder(SIZE);
                for (int i = 0; i < SIZE; i++) {
                    result.append((char)(in.readByte() & 0xFF));
                }
                return result;
            }
            public int fixedSize() {
                return -1;
            }
        };
    

    Unfortunately the code would not run, because MapDB tries to assert that the object is Serializable and the CharSequence interface is not: Exception in thread "main" java.io.IOError: java.io.NotSerializableException. Why is this kind of check even necessary? As long as the serializer promises to properly handle CharSequence, why shouldn't MapDB let it?

    Then I changed my code to use String (and added a wasteful conversion step from StringBuilder to String), but the error remained. Why doesn't MapDB support serializing CharSequences (and maybe other interfaces) natively, as well as writing custom serializers for basic types like String?

    NOTE: I am using this serializer as part of a composite BTree key, like in the examples

    Caused by: java.io.NotSerializableException: com.test.XXXX
            at org.mapdb.SerializerPojo.assertClassSerializable(SerializerPojo.java:370)
            at org.mapdb.SerializerPojo.serializeUnknownObject(SerializerPojo.java:433)
            at org.mapdb.SerializerBase.serialize(SerializerBase.java:985)
            at org.mapdb.SerializerBase$17.serialize(SerializerBase.java:371)
            at org.mapdb.SerializerBase$17.serialize(SerializerBase.java:362)
            at org.mapdb.SerializerBase.serialize(SerializerBase.java:980)
            at org.mapdb.SerializerBase.serialize(SerializerBase.java:931)
            at org.mapdb.Serializer.valueArraySerialize(Serializer.java:1714)
            at org.mapdb.BTreeMap$NodeSerializer.serialize(BTreeMap.java:766)
            at org.mapdb.BTreeMap$NodeSerializer.serialize(BTreeMap.java:702)
            at org.mapdb.Store.serialize(Store.java:270)
            ... 7 more
    
    bug 2.0 
    opened by dmk23 21
  • infinite loop in HashMap iteration

    Hi, just to inform you about a bug. When I use MapDB and want to iterate over key values, I fall into an infinite loop. My "for each" statement returns the same object key again and again (same system id). I profiled it and saw that all the time is consumed by next, moveToNext and advance at line 972 of file HTreeMap. The advance method returns a new list again and again. After that, I must delete my MapDB file for my program to run correctly. I hope you can find and reproduce it.

    Sincerely,

    bug 
    opened by OBOne 19
  • worse performance with transactionDisable

    Hi, I am trying to fine-tune my MapDB config (2.0.0-SNAPSHOT), but I am a bit puzzled, as the best config so far is no config.

    My use case is that I write several maps in parallel by indexing values from a CSV file. The maps are tree maps with no particular configuration (a Map<Integer,String> and a Map<String,Integer>). This benchmark shows how long it took to process each 1000 lines with the default config:

    1000;214 2000;101 3000;153 4000;151 5000;194 6000;258 7000;294 8000;327 9000;398 10000;436 11000;578 12000;426 13000;454 14000;486 15000;526 16000;556 17000;597 18000;633 19000;684 20000;721 21000;773 22000;839 23000;909 24000;958 25000;1002 26000;1105 27000;1138 28000;1151 29000;1191 30000;1182 31000;1240 32000;1280

    Now here is the benchmark with transactions disabled: 1000;778 2000;709 3000;555 4000;611 5000;653 6000;2242 7000;4884 8000;7633 9000;10653 10000;13572 11000;16610 12000;16653 13000;16653 14000;16628 ... [it takes ages after that] ... I would still like a way to use the Data Pump, as I know how much quicker it is, but right now I am stuck with regular put(key,value).

    Is it normal that disabling transactions impacts performance so badly? I would have thought it should improve performance instead.

    enhancement 2.0 
    opened by adridadou 18
  • Bug with Serialization

    Take a look at this error. But everything was fine before... it's strange.

    Exception occurred in target VM: Could not instantiate class 
    java.lang.RuntimeException: Could not instantiate class
        at org.mapdb.SerializerPojo.deserializeUnknownHeader(SerializerPojo.java:483)
        at org.mapdb.SerializerBase.deserialize3(SerializerBase.java:1216)
        at org.mapdb.SerializerBase.deserialize(SerializerBase.java:1132)
        at org.mapdb.SerializerBase.deserialize(SerializerBase.java:867)
        at org.mapdb.SerializerPojo.deserialize(SerializerPojo.java:701)
        at org.mapdb.HTreeMap$2.deserialize(HTreeMap.java:135)
        at org.mapdb.HTreeMap$2.deserialize(HTreeMap.java:121)
        at org.mapdb.Store.deserialize(Store.java:297)
        at org.mapdb.StoreDirect.get2(StoreDirect.java:475)
        at org.mapdb.StoreWAL.get2(StoreWAL.java:368)
        at org.mapdb.StoreWAL.get(StoreWAL.java:352)
        at org.mapdb.Caches$HashTable.get(Caches.java:245)
        at org.mapdb.EngineWrapper.get(EngineWrapper.java:58)
        at org.mapdb.HTreeMap.recursiveDirCount(HTreeMap.java:350)
        at org.mapdb.HTreeMap.recursiveDirCount(HTreeMap.java:345)
        at org.mapdb.HTreeMap.sizeLong(HTreeMap.java:325)
        at org.mapdb.HTreeMap.size(HTreeMap.java:305)
        at com.highdeveloper.helperj.Test.main(Test.java:20)
    Caused by: java.lang.IndexOutOfBoundsException: Index: 11, Size: 11
        at java.util.ArrayList.rangeCheck(ArrayList.java:638)
        at java.util.ArrayList.get(ArrayList.java:414)
        at org.mapdb.SerializerPojo.deserializeUnknownHeader(SerializerPojo.java:476)
        ... 17 more
    
    bug 2.0 android 
    opened by csilvav 17
  • db file load costs too much time

    Hi,

    (1) I have a db file with the following keys; each List<Integer> has a length of no more than 50,000, and the total db file is about 11 GB in size:

        private HTreeMap<String, List<Integer>> wordIdentityMap = null;
        private HTreeMap<String, List<Integer>> phraseAlignmentMap = null;

    (2) db initial code:

        db = DBMaker.fileDB(new File(modelPath))
                .cacheSize(cacheSize)           // 50,000
                .allocateStartSize(StartSize)   // 100000
                .allocateIncrement(increment)   // 10000
                .cacheLRUEnable()
                .make();
        wordIdentityScoreMap = db.hashMap("wordIdentityScoreMap");
        phraseAlignmentMap = db.hashMap("phraseAlignmentMap");
        long total = semanticIds.size() + sentenceIds.size()
                + sent2semIndexer.size() + posDeletionScoreMap.size(); // takes a very long time to get here!!
        long counter = 0;

    (3) Problems: it takes a lot of time to load this db file. In fact I waited for almost 2 hours and gave up! Most of the time the CPU hangs and CPU usage is very low... What is more, it also takes a lot of time to generate this db file (almost 1 day to run db.commit). Is there any way to solve this problem?

    opened by smartnlp 16
  • ArrayIndexOutOfBoundsException with BTreeMap.put using v0.9.11

    I updated to MapDB 0.9.11 and I'm now getting an ArrayIndexOutOfBoundsException on BTreeMap.put after adding several entries. Looking at the MapDB file while adding entries, it never exceeds 16 MB, whereas it was growing bigger before (~800 MB).

    I'm running on Win7 64-bit, Java JDK 1.7.0_25, using DBMaker.newFileDB to create the DB. The same error also happens if I disable the cache or the asyncWrite feature. I will try to create a test case, but maybe this helps already.

    java.lang.ArrayIndexOutOfBoundsException: 147448
        at org.mapdb.Volume$ByteBufferVol.getLong(Volume.java:327)
        at org.mapdb.StoreDirect.get2(StoreDirect.java:440)
        at org.mapdb.StoreDirect.get(StoreDirect.java:428)
        at org.mapdb.EngineWrapper.get(EngineWrapper.java:60)
        at org.mapdb.AsyncWriteEngine.get(AsyncWriteEngine.java:399)
        at org.mapdb.Caches$HashTable.get(Caches.java:230)
        at org.mapdb.BTreeMap.put2(BTreeMap.java:664)
        at org.mapdb.BTreeMap.put(BTreeMap.java:644)
    

    The ArrayIndexOutOfBoundsException always occurs at a different size.

    bug 
    opened by fmannhardt 16
  • Issues with StoreWAL / TreeMap with 0.9.9

    I keep getting errors like the following after upgrading to 0.9.9; they usually seem to happen after cycling the JVM a couple of times. I am running on Java 8. I'll try Java 7 for a while and see if they reoccur, although it's unlikely to be the cause.

    My setup is quite simple:

        db = DBMaker.newFileDB(dbFile)
                .closeOnJvmShutdown()
                .make();

    java.io.IOError: java.io.IOException: Zero Header, data corrupted
        at org.mapdb.SerializerBase.deserialize(SerializerBase.java:825)
        at org.mapdb.SerializerBase.deserialize(SerializerBase.java:811)
        at org.mapdb.BTreeMap$NodeSerializer.deserialize(BTreeMap.java:451)
        at org.mapdb.BTreeMap$NodeSerializer.deserialize(BTreeMap.java:288)
        at org.mapdb.Store.deserialize(Store.java:270)
        at org.mapdb.StoreDirect.get2(StoreDirect.java:468)
        at org.mapdb.StoreWAL.get2(StoreWAL.java:347)
        at org.mapdb.StoreWAL.get(StoreWAL.java:331)
        at org.mapdb.Caches$HashTable.get(Caches.java:230)
        at org.mapdb.BTreeMap.<init>(BTreeMap.java:542)
        at org.mapdb.DB.getTreeMap(DB.java:778)

    bug 
    opened by flavor8 16
  • Application hangs with latest version of mapdb 3.0.0-M6

    A simple application using the following dependency hangs up in the classloader. The dependency:

        org.mapdb:mapdb:3.0.0-M6

    The sample code:

        DB db = DBMaker.fileDB("file.db")
                // TODO memory mapped files enable here
                .make();
        ConcurrentMap<String, String> map = db
                .hashMap("map", Serializer.STRING, Serializer.STRING)
                .make();
        for (int i = 0; i < 100; i++) {
            String id = UUID.randomUUID().toString();
            map.put(id, id + (i * 10));
            System.out.println("Writing");
        }
        db.close();

    There are no exceptions or errors thrown either. I am running this in Eclipse.

    opened by amitvc 15
  • Fun.HI() ClassCastException

    java.lang.ClassCastException: class org.mapdb.Fun$3 cannot be cast to class java.math.BigDecimal (org.mapdb.Fun$3 is in unnamed module of loader 'app'; java.math.BigDecimal is in module java.base of loader 'bootstrap')

    on Fun.HI()

    protected NavigableMap<Tuple3<Long, BigDecimal, Long>, byte[]> assetKeyMap;
    
                this.assetKeyMap = database.createTreeMap("balances_key_2_asset_bal_address")
                        .comparator(new Fun.Tuple3Comparator<>(Fun.COMPARATOR, Fun.COMPARATOR, Fun.COMPARATOR))
                        .makeOrGet();
    
    
            Iterator<byte[]> iter = this.assetKeyMap.subMap(
                    Fun.t3(assetKey, fromOwnAmount, Long.MIN_VALUE),
                    Fun.t3(assetKey, Fun.HI(), Long.MAX_VALUE)).values().iterator();
    
    

    I use both versions in two projects:

    implementation group: 'org.mapdb', name: 'mapdb', version: '1.0.7'
    //implementation group: 'org.mapdb', name: 'mapdb', version: '1.0.9'
    

    In one project everything works; in the other there's an error.

    opened by icreator 1
  • mapdb 3.0.8 fails to fileLoad mapDB file when readOnly set

    fileLoad() of the MapDB file fails when the DBMaker is set to readOnly; fileLoad() succeeds if not readOnly.

    java.lang.UnsupportedOperationException: null
        at java.nio.MappedByteBuffer.checkMapped(MappedByteBuffer.java:96) ~[?:1.8.0_302]
        at java.nio.MappedByteBuffer.load(MappedByteBuffer.java:156) ~[?:1.8.0_302]
        at org.mapdb.volume.MappedFileVolSingle.fileLoad(MappedFileVolSingle.java:173) ~[mapdb-3.0.8.jar:?]
        at org.mapdb.StoreDirect.fileLoad(StoreDirect.kt:1112) ~[mapdb-3.0.8.jar:?]
        at <our code that calls db_.getStore().fileLoad();   see below>
    

    our calling code:

    //it used STRING & JAVA for key & value serializers respectively to create the file, and the file was committed & closed                     
                        DB db = DBMaker
                                .fileDB(file)
                                .fileMmapEnableIfSupported()
                                .fileMmapPreclearDisable()
                                .fileChannelEnable()
                                .cleanerHackEnable()
                                .closeOnJvmShutdown()
                                .readOnly()    // only fails when this is uncommented; it seems you can't load a file if you set it to readOnly
                                .make();
    
                        db.getStore().fileLoad();  <-- fails here
    

    The comment below is in the Java source (http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/nio/MappedByteBuffer.java), and it describes exactly what the mapdb code is doing:

        private void checkMapped() {
            if (fd == null)
                // Can only happen if a luser explicitly casts a direct byte buffer
                throw new UnsupportedOperationException();
        }
    

    https://github.com/jankotek/mapdb/blob/release-3.0/src/main/java/org/mapdb/volume/MappedFileVolSingle.java

    ...Constructor:
                // this returns a DirectByteBuffer (extends MappedByteBuffer);
                // its call to super() does seem to set the fd for Java 8,
                // or it is lost at the asReadOnlyBuffer() copy below
                buffer = raf.getChannel().map(mapMode, 0, maxSize);
                if (readOnly)
                    // this calls an abstract ByteBuffer method, but the copy
                    // never sets the fd value (MappedByteBuffer has 2
                    // constructors, one that sets fd and one that doesn't;
                    // control flow is obscured, but it calls the one where
                    // fd = null, so the copy is not mapped)
                    buffer = buffer.asReadOnlyBuffer();
    ...
        @Override
        public boolean fileLoad() {
            ((MappedByteBuffer) buffer).load();
            return true;
        }
    
    opened by stevenwernercs 0
  • Dependencies fail on maven - Guava

    Hi, trying to compile today with Maven: org.mapdb:mapdb:3.0.8

    A year ago the same project worked OK, but now I get the following error:

    [ERROR] Failed to execute goal on project xxx: Could not resolve dependencies for project xxx: Failed to collect dependencies at org.mapdb:mapdb:jar:3.0.8 -> com.google.guava:guava:jar:31.1.0.redhat-00001: Failed to read artifact descriptor for com.google.guava:guava:jar:31.1.0.redhat-00001

    opened by eriera 2
  • NPE in ByteBufferVol which appears to corrupt header

    We got this exception:

    java.lang.NullPointerException
        at org.mapdb.volume.ByteBufferVol.getSlice(ByteBufferVol.java:42)
        at org.mapdb.volume.ByteBufferVol.getLong(ByteBufferVol.java:121)
        at org.mapdb.StoreDirect.longStackTake(StoreDirect.kt:389)
        at org.mapdb.StoreDirectAbstract.allocateData(StoreDirectAbstract.kt:293)
        at org.mapdb.StoreDirect.put(StoreDirect.kt:618)
        at org.mapdb.HTreeMap.valueWrap(HTreeMap.kt:1208)
        at org.mapdb.HTreeMap.putprotected(HTreeMap.kt:344)
        at org.mapdb.HTreeMap.put(HTreeMap.kt:324)

    This is similar to the exception reported in #963 but not identical.

    This appears to have corrupted the MapDB header as upon restart we got

    java[11865]: at org.mapdb.DBMaker$Maker.make(DBMaker.kt:450)
    java[11865]: at org.mapdb.StoreDirect$Companion.make$default(StoreDirect.kt:56)
    java[11865]: at org.mapdb.StoreDirect$Companion.make(StoreDirect.kt:57)
    java[11865]: at org.mapdb.StoreDirect.<init>(StoreDirect.kt:114)
    java[11865]: at org.mapdb.StoreDirectAbstract.fileHeaderCheck(StoreDirectAbstract.kt:113)
    java[11865]: Exception in thread "main" org.mapdb.DBException$DataCorruption: Header checksum broken. Sto

    (This is inverted because it's from a systemctl log.)

    We are using MapDB 3.0.8 and it is instantiated like so:

            mapDB = DBMaker
                .fileDB(mapDBFile)
                .closeOnJvmShutdown()
                .fileMmapEnable()
                .fileMmapPreclearDisable()
                .make();
    

    Transactions are not enabled because we need to be able to shrink the file. However, we

    1. Always call commit() after changing the map (there is only one map).
    2. Always change the map inside a Guava Monitor so two threads cannot change it at the same time (a sketch follows below).
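
    A rough sketch of that update discipline (assuming Guava's Monitor API; names are illustrative):

        import com.google.common.util.concurrent.Monitor;

        final Monitor monitor = new Monitor();

        void update(String key, byte[] value) {
            monitor.enter();  // only one thread may change the map at a time
            try {
                map.put(key, value);
                mapDB.commit();  // always commit after changing the map
            } finally {
                monitor.leave();
            }
        }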
    opened by dan-blum 0
Owner

Jan Kotek, MapDB author