Port of LevelDB to Java

Overview

LevelDB in Java

This is a rewrite (port) of LevelDB in Java. The goal is a feature-complete implementation that is within 10% of the performance of the C++ original and produces byte-for-byte exact copies of the files the C++ original would write.

Current status

Currently the code base is basically functional, but only trivially tested. In some places, this code is a literal conversion of the C++ code and in others it has been converted to a more natural Java style. The plan is to leave the code closer to the C++ original until the baseline performance has been established.

API Usage:

Recommended Package imports:

import org.iq80.leveldb.*;
import static org.iq80.leveldb.impl.Iq80DBFactory.*;
import java.io.*;

Opening and closing the database.

Options options = new Options();
options.createIfMissing(true);
DB db = factory.open(new File("example"), options);
try {
  // Use the db in here....
} finally {
  // Make sure you close the db to shut down the
  // database and avoid resource leaks.
  db.close();
}
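
Since DB extends Closeable, the same pattern can also be written with try-with-resources on Java 7+; a minimal sketch:

Options options = new Options();
options.createIfMissing(true);
try (DB db = factory.open(new File("example"), options)) {
  // Use the db in here....
}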

Putting, Getting, and Deleting key/values.

db.put(bytes("Tampa"), bytes("rocks"));
String value = asString(db.get(bytes("Tampa")));
db.delete(bytes("Tampa"));
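
Writes can also take an explicit WriteOptions, for example to force a synchronous write that is flushed to disk before the call returns (a minimal sketch, assuming the put and delete overloads that accept a WriteOptions):

WriteOptions wo = new WriteOptions().sync(true); // sync the write to disk before returning
db.put(bytes("Tampa"), bytes("rocks"), wo);
db.delete(bytes("Tampa"), wo);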

Performing Batch/Bulk/Atomic Updates.

WriteBatch batch = db.createWriteBatch();
try {
  batch.delete(bytes("Denver"));
  batch.put(bytes("Tampa"), bytes("green"));
  batch.put(bytes("London"), bytes("red"));

  db.write(batch);
} finally {
  // Make sure you close the batch to avoid resource leaks.
  batch.close();
}

Iterating key/values.

DBIterator iterator = db.iterator();
try {
  for(iterator.seekToFirst(); iterator.hasNext(); iterator.next()) {
    String key = asString(iterator.peekNext().getKey());
    String value = asString(iterator.peekNext().getValue());
    System.out.println(key+" = "+value);
  }
} finally {
  // Make sure you close the iterator to avoid resource leaks.
  iterator.close();
}
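
The iterator can also be positioned at an arbitrary key with seek, which is handy for range scans; a minimal sketch in the same style as above:

DBIterator iterator = db.iterator();
try {
  // Start at the first key greater than or equal to "Tampa" and scan forward.
  for(iterator.seek(bytes("Tampa")); iterator.hasNext(); iterator.next()) {
    String key = asString(iterator.peekNext().getKey());
    String value = asString(iterator.peekNext().getValue());
    System.out.println(key+" = "+value);
  }
} finally {
  // Make sure you close the iterator to avoid resource leaks.
  iterator.close();
}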

Working against a Snapshot view of the Database.

ReadOptions ro = new ReadOptions();
ro.snapshot(db.getSnapshot());
try {
  
  // All read operations will now use the same 
  // consistent view of the data.
  ... = db.iterator(ro);
  ... = db.get(bytes("Tampa"), ro);

} finally {
  // Make sure you close the snapshot to avoid resource leaks.
  ro.snapshot().close();
}

Using a custom Comparator.

DBComparator comparator = new DBComparator(){
    public int compare(byte[] key1, byte[] key2) {
        return new String(key1).compareTo(new String(key2));
    }
    public String name() {
        return "simple";
    }
    public byte[] findShortestSeparator(byte[] start, byte[] limit) {
        return start;
    }
    public byte[] findShortSuccessor(byte[] key) {
        return key;
    }
};
Options options = new Options();
options.comparator(comparator);
DB db = factory.open(new File("example"), options);

Disabling Compression

Options options = new Options();
options.compressionType(CompressionType.NONE);
DB db = factory.open(new File("example"), options);

Configuring the Cache

Options options = new Options();
options.cacheSize(100 * 1048576); // 100MB cache
DB db = factory.open(new File("example"), options);

Getting approximate sizes.

long[] sizes = db.getApproximateSizes(new Range(bytes("a"), bytes("k")), new Range(bytes("k"), bytes("z")));
System.out.println("Size: "+sizes[0]+", "+sizes[1]);

Getting database status.

String stats = db.getProperty("leveldb.stats");
System.out.println(stats);

Getting informational log messages.

Logger logger = new Logger() {
  public void log(String message) {
    System.out.println(message);
  }
};
Options options = new Options();
options.logger(logger);
DB db = factory.open(new File("example"), options);

Destroying a database.

Options options = new Options();
factory.destroy(new File("example"), options);

Projects using this port of LevelDB

  • ActiveMQ Apollo: Defaults to using leveldbjni, but falls back to this port if the jni port is not available on your platform.
Comments
  • error when a large amount of

    Create a database and add 4 billion rows with a for loop.

    When the database size reaches about 1 GB (roughly 21,692,085 rows), the following exceptions occur:

    java.lang.RuntimeException: Could not open table 2700
    at org.iq80.leveldb.impl.TableCache.getTable(TableCache.java:95)
    at org.iq80.leveldb.impl.TableCache.newIterator(TableCache.java:77)
    at org.iq80.leveldb.impl.DbImpl.finishCompactionOutputFile(DbImpl.java:1112)
    at org.iq80.leveldb.impl.DbImpl.doCompactionWork(DbImpl.java:1035)
    at org.iq80.leveldb.impl.DbImpl.backgroundCompaction(DbImpl.java:444)
    at org.iq80.leveldb.impl.DbImpl.backgroundCall(DbImpl.java:395)
    at org.iq80.leveldb.impl.DbImpl.access$100(DbImpl.java:79)
    at org.iq80.leveldb.impl.DbImpl$2.call(DbImpl.java:370)
    at org.iq80.leveldb.impl.DbImpl$2.call(DbImpl.java:364)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
    Caused by: java.io.IOException: Map failed
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:849)
    at org.iq80.leveldb.table.Table.<init>(Table.java:63)
    at org.iq80.leveldb.impl.TableCache$TableAndFile.<init>(TableCache.java:118)
    at org.iq80.leveldb.impl.TableCache$TableAndFile.<init>(TableCache.java:105)
    at org.iq80.leveldb.impl.TableCache$1.load(TableCache.java:65)
    at org.iq80.leveldb.impl.TableCache$1.load(TableCache.java:61)
    at com.google.common.cache.CustomConcurrentHashMap$ComputingValueReference.compute(CustomConcurrentHashMap.java:3426)
    at com.google.common.cache.CustomConcurrentHashMap$Segment.compute(CustomConcurrentHashMap.java:2322)
    at com.google.common.cache.CustomConcurrentHashMap$Segment.getOrCompute(CustomConcurrentHashMap.java:2291)
    at com.google.common.cache.CustomConcurrentHashMap.getOrCompute(CustomConcurrentHashMap.java:3802)
    at com.google.common.cache.ComputingCache.get(ComputingCache.java:46)
    at org.iq80.leveldb.impl.TableCache.getTable(TableCache.java:88)
    ... 13 more
    Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:846)
    ... 24 more 
    java.lang.RuntimeException: Could not open table 2702
    at org.iq80.leveldb.impl.TableCache.getTable(TableCache.java:95)
    at org.iq80.leveldb.impl.TableCache.newIterator(TableCache.java:77)
    at org.iq80.leveldb.impl.TableCache.newIterator(TableCache.java:72)
    at org.iq80.leveldb.impl.DbImpl.buildTable(DbImpl.java:936)
    at org.iq80.leveldb.impl.DbImpl.writeLevel0Table(DbImpl.java:881)
    at org.iq80.leveldb.impl.DbImpl.compactMemTableInternal(DbImpl.java:847)
    at org.iq80.leveldb.impl.DbImpl.backgroundCompaction(DbImpl.java:421)
    at org.iq80.leveldb.impl.DbImpl.backgroundCall(DbImpl.java:395)
    at org.iq80.leveldb.impl.DbImpl.access$100(DbImpl.java:79)
    at org.iq80.leveldb.impl.DbImpl$2.call(DbImpl.java:370)
    at org.iq80.leveldb.impl.DbImpl$2.call(DbImpl.java:364)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
    Caused by: java.io.IOException: Map failed
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:849)
    at org.iq80.leveldb.table.Table.<init>(Table.java:63)
    at org.iq80.leveldb.impl.TableCache$TableAndFile.<init>(TableCache.java:118)
    at org.iq80.leveldb.impl.TableCache$TableAndFile.<init>(TableCache.java:105)
    at org.iq80.leveldb.impl.TableCache$1.load(TableCache.java:65)
    at org.iq80.leveldb.impl.TableCache$1.load(TableCache.java:61)
    at com.google.common.cache.CustomConcurrentHashMap$ComputingValueReference.compute(CustomConcurrentHashMap.java:3426)
    at com.google.common.cache.CustomConcurrentHashMap$Segment.compute(CustomConcurrentHashMap.java:2322)
    at com.google.common.cache.CustomConcurrentHashMap$Segment.getOrCompute(CustomConcurrentHashMap.java:2291)
    at com.google.common.cache.CustomConcurrentHashMap.getOrCompute(CustomConcurrentHashMap.java:3802)
    at com.google.common.cache.ComputingCache.get(ComputingCache.java:46)
    at org.iq80.leveldb.impl.TableCache.getTable(TableCache.java:88)
    ... 15 more
    Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:846)
    ... 26 more
    

    With a large database, after executing the commands

        db.close();
        db = factory.open(patch, options);
    

    the following error occurs:

    Exception in thread "main" java.io.FileNotFoundException: leveldb\db\000015.sst (The requested operation can not be executed for a file with a user-mapped section open)
    

    (Translated by Google.)

    opened by hitman249 16
  • Database size limit?

    While experimenting with leveldb 0.10 to process a large dataset, I ran across this exception:

    Exception in thread "main" org.iq80.leveldb.impl.DbImpl$BackgroundProcessingException: java.lang.NullPointerException
        at org.iq80.leveldb.impl.DbImpl.checkBackgroundException(DbImpl.java:421)
        at org.iq80.leveldb.impl.DbImpl.writeInternal(DbImpl.java:683)
        at org.iq80.leveldb.impl.DbImpl.put(DbImpl.java:649)
        at org.iq80.leveldb.impl.DbImpl.put(DbImpl.java:642)
        at com.locoslab.library.osm.importer.LevelSink.process(LevelSink.java:151)
        at crosby.binary.osmosis.OsmosisBinaryParser.parseDense(OsmosisBinaryParser.java:138)
        at org.openstreetmap.osmosis.osmbinary.BinaryParser.parse(BinaryParser.java:124)
        at org.openstreetmap.osmosis.osmbinary.BinaryParser.handleBlock(BinaryParser.java:68)
        at org.openstreetmap.osmosis.osmbinary.file.FileBlock.process(FileBlock.java:135)
        at org.openstreetmap.osmosis.osmbinary.file.BlockInputStream.process(BlockInputStream.java:34)
        at crosby.binary.osmosis.OsmosisReader.run(OsmosisReader.java:45)
        at com.locoslab.library.osm.importer.App.main(App.java:27)
    Caused by: java.lang.NullPointerException
        at org.iq80.leveldb.impl.Compaction.totalFileSize(Compaction.java:129)
        at org.iq80.leveldb.impl.Compaction.isTrivialMove(Compaction.java:122)
        at org.iq80.leveldb.impl.DbImpl.backgroundCompaction(DbImpl.java:480)
        at org.iq80.leveldb.impl.DbImpl.backgroundCall(DbImpl.java:436)
        at org.iq80.leveldb.impl.DbImpl.access$100(DbImpl.java:85)
        at org.iq80.leveldb.impl.DbImpl$2.call(DbImpl.java:404)
        at org.iq80.leveldb.impl.DbImpl$2.call(DbImpl.java:398)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)

    It seems to be caused by some limitation that has to do with the size of the db. I am unsure whether it is the number of files or the size of the db itself. Furthermore, it seems quite possible that it is some limitation of the JVM that I am using.

    Yet, I am sure that it has nothing to do with the number of entries. In my first test, I was able to store 2.8 billion entries. After using a more compact data representation, I was able to increase the number of entries to 3.8 billion. However, when the db reaches about 240GB, this exception is raised deterministically on my Windows 10 machine running OpenJDK 11.

    It is not a big problem for me, since I can easily partition my data to be stored in multiple level dbs. I just wanted to bring it to your attention. Thank you for providing this great tool.

    opened by locosmac 14
  • Table.openBlock fails on Windows 7 but works on Debian

    The following code (using release 0.11):

       public ChainstateIterator() {
            Options options = new Options();
            options.createIfMissing(false);
            options.compressionType(CompressionType.NONE);
            String datadir = treasure.getDataDir();
    
            try {
                db = factory.open(new File(datadir + "chainstate" + File.separator), options);
                obfuscationKey = extractKey(db);
                System.out.println(new Bytes(obfuscationKey));
                levelDBIterator = db.iterator();
                levelDBIterator.seekToFirst();
            } catch (IOException e) {
                ...
            }
        }
    

    works perfectly on Debian Stretch, amd64 but fails with:

    Exception in thread "main" java.lang.RuntimeException: java.nio.channels.ClosedChannelException
    	at com.google.common.base.Throwables.propagate(Throwables.java:241)
    	at org.iq80.leveldb.table.Table.openBlock(Table.java:83)
    	at org.iq80.leveldb.util.TableIterator.getNextBlock(TableIterator.java:102)
    	at org.iq80.leveldb.util.TableIterator.seekInternal(TableIterator.java:57)
    	at org.iq80.leveldb.util.TableIterator.seekInternal(TableIterator.java:26)
    	at org.iq80.leveldb.util.AbstractSeekingIterator.seek(AbstractSeekingIterator.java:41)
    	at org.iq80.leveldb.util.InternalTableIterator.seekInternal(InternalTableIterator.java:45)
    	at org.iq80.leveldb.util.InternalTableIterator.seekInternal(InternalTableIterator.java:25)
    	at org.iq80.leveldb.util.AbstractSeekingIterator.seek(AbstractSeekingIterator.java:41)
    	at org.iq80.leveldb.impl.Level.get(Level.java:134)
    	at org.iq80.leveldb.impl.Version.get(Version.java:172)
    	at org.iq80.leveldb.impl.VersionSet.get(VersionSet.java:223)
    	at org.iq80.leveldb.impl.DbImpl.get(DbImpl.java:616)
    	at org.iq80.leveldb.impl.DbImpl.get(DbImpl.java:577)
    	at nxt.crosschain.db.ChainstateIterator.extractKey(ChainstateIterator.java:110)
    	at nxt.crosschain.db.ChainstateIterator.<init>(ChainstateIterator.java:44)
    	at nxt.crosschain.db.LevelDBImportWindowsTest.main(LevelDBImportWindowsTest.java:67)
    Caused by: java.nio.channels.ClosedChannelException
    	at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110)
    	at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:721)
    	at org.iq80.leveldb.table.FileChannelTable.read(FileChannelTable.java:96)
    	at org.iq80.leveldb.table.FileChannelTable.readBlock(FileChannelTable.java:55)
    	at org.iq80.leveldb.table.Table.openBlock(Table.java:80)
    	... 15 more
    

    on Windows 7, x86 using JDK 1.8.0_161. What could be the problem with nio.channels?

    opened by metroal 9
  • Caused by: org.iq80.leveldb.impl.DbImpl$BackgroundProcessingException: java.io.FileNotFoundException: leveldb/mapdbcache/021748.sst (Too many open files)

    @dain

    This is occurring with v0.7. We are inserting data into leveldb, but every 3 hours or so we see exceptions such as:

    Caused by: org.iq80.leveldb.impl.DbImpl$BackgroundProcessingException: java.io.FileNotFoundException: leveldb/mapdbcache/021748.sst (Too many open files)
        at org.iq80.leveldb.impl.DbImpl.checkBackgroundException(DbImpl.java:411)
        at org.iq80.leveldb.impl.DbImpl.get(DbImpl.java:572)
        at org.iq80.leveldb.impl.DbImpl.get(DbImpl.java:565)
        at com.shn.logs.cache.impl.LevelDBByteStore.getBytes(LevelDBByteStore.java:79)
        at com.shn.logs.cache.impl.LevelDBGenericStore.get(LevelDBGenericStore.java:99)
        at com.shn.logs.cache.impl.FileBackedUniqueItemsStore$1.call(FileBackedUniqueItemsStore.java:89)
        at com.shn.logs.cache.impl.FileBackedUniqueItemsStore$1.call(FileBackedUniqueItemsStore.java:85)
        at com.shn.logs.cache.impl.LevelDBGenericStore.lockedWriteOp(LevelDBGenericStore.java:126)
        ... 16 more
    Caused by: java.io.FileNotFoundException: leveldb/mapdbcache/021748.sst (Too many open files)
        at java.io.FileOutputStream.open0(Native Method)
        at java.io.FileOutputStream.open(FileOutputStream.java:270)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
        at org.iq80.leveldb.impl.DbImpl.openCompactionOutputFile(DbImpl.java:1134)
        at org.iq80.leveldb.impl.DbImpl.doCompactionWork(DbImpl.java:1085)
        at org.iq80.leveldb.impl.DbImpl.backgroundCompaction(DbImpl.java:478)
        at org.iq80.leveldb.impl.DbImpl.backgroundCall(DbImpl.java:426)
        at org.iq80.leveldb.impl.DbImpl.access$100(DbImpl.java:83)
        at org.iq80.leveldb.impl.DbImpl$2.call(DbImpl.java:396)
        at org.iq80.leveldb.impl.DbImpl$2.call(DbImpl.java:390)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        ... 3 more
    

    I looked at the source for v 0.7, which points me to

            compactionState.outfile = new FileOutputStream(file).getChannel();
    

    and the call chain tells me that levelDB is running the compaction in the background (is that correct?)

    Also, when we set up an instance of leveldb, we do not specify anything related to compaction.

     private DB init(File storeLocation, long cacheSizeInMb, DBComparator dbComparator) {
        Options options = new Options()
            .cacheSize(ByteUnit.MEGA.of(cacheSizeInMb))
            .comparator(dbComparator)
            .compressionType(CompressionType.NONE)
            .createIfMissing(true);
        try {
          return Iq80DBFactory.factory.open(storeLocation, options);
        } catch (IOException e) {
          throw new ShnRuntimeException("Failed to open LevelDB store", e);
        }
      }
    

    Do I need to? Am I missing anything?
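
    For reference, the Options class also exposes a maxOpenFiles setting that caps how many table files the internal table cache keeps open at once. A minimal sketch of adding it to the init() above (the value 1000 is only an example; keep it below the process's file-descriptor limit):

        Options options = new Options()
            .cacheSize(ByteUnit.MEGA.of(cacheSizeInMb))
            .comparator(dbComparator)
            .compressionType(CompressionType.NONE)
            .createIfMissing(true)
            .maxOpenFiles(1000);   // cap on table files kept open by the table cache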

    opened by hhimanshu 9
  • Build fails

    Maven build fails with following error :

    Tests run: 73, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 230.714 sec <<< FAILURE!

    Results :

    Failed tests: testCantCreateDirectoryReturnMessage(org.iq80.leveldb.impl.DbImplTest):

    Tests run: 73, Failures: 1, Errors: 0, Skipped: 0

    opened by tarung 8
  • read (get) performance at scale?

    I'm hoping for some help, or at least some feedback. I'm getting wonderful performance with small databases, say 4 million rows or less, but dramatically worse performance as the dataset grows. At 20 M rows, I get performance of about 3,000 random reads per second, with a large (4GB) cache. I don't use Compression because most of the data is already compressed. The original LDB C++ benchmark says they get better than 129,000 random reads per second with 100 byte keys and a million records.

    Here is my configuration: leveldb version 0.7, Java 8, new OS X MacBook Pro (16GB RAM, SSD, 4 cores). Values are 800 bytes each.

    option values:

        private static final int MAX_OPEN_FILES = 7500;
        private static final long CACHE_SIZE_MB = 4000;
        private static final int WRITE_BUFFER_MB = 200;
        private static final int BLOCK_RESTART_INTERVAL = 32;
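
    Assuming these constants are fed into the Options setters of the same names, the configuration would be applied roughly like this (a sketch, not taken from the original report):

        Options options = new Options()
            .maxOpenFiles(7500)
            .cacheSize(4000L * 1048576)        // 4 GB block cache
            .writeBufferSize(200 * 1048576)    // 200 MB write buffer
            .blockRestartInterval(32)
            .compressionType(CompressionType.NONE);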

    Almost all the time is in the call to db.get(). Does this kind of performance seem reasonable? Also wondering if anyone uses this version for 10s to 100s of millions of records, or is this the best it can do?

    opened by lwhite1 6
  • Comparator passed via Options.comparator() doesn't get invoked.

    Custom Comparator passed in following manner get ignored :

    Options opts = new Options();
    opts.createIfMissing(true);
    opts.comparator(new Mycomparator());

    DBFactory fctry = Iq80DBFactory.factory;
    DB db = fctry.open(new File("./data/leveldbfiles"), opts);

    Is there any other way to pass a custom Comparator ?

    Regards, Tarun

    opened by tarung 6
  • Large SST files (ignoring VersionSet.TARGET_FILE_SIZE?)

    In my databases, SSTable file sizes are more or less the same within an individual db. Varying either the content or the db options can cause sst files to be ~2MB in one database, ~23MB in another, ~39MB in a third, etc. That's the approximate size of every sst file in the system, in databases with dozens to hundreds of SST files.

    Is there something I can do to control this? The large db files perform poorly on read-heavy loads.

    thanks,

    opened by lwhite1 5
  • Close FileInputStream

    Having lots of data written into the database, the database ran into "Too many open files". In order to correct this, the FileInputStreams should be closed via AutoCloseable... (Also, with this fix, maxOpenFiles should work, as the FileInputStreams of the cache get properly released; see Bug #37.)
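
    As an illustration of the idea (not the actual patch), a bounded Guava cache can close each stream when it is evicted, so file descriptors are released instead of accumulating until the OS limit is hit:

        // uses com.google.common.cache.{CacheBuilder, CacheLoader, LoadingCache, RemovalListener, RemovalNotification}
        LoadingCache<File, FileInputStream> openFiles = CacheBuilder.newBuilder()
                .maximumSize(1000) // plays roughly the role of maxOpenFiles
                .removalListener(new RemovalListener<File, FileInputStream>() {
                    @Override
                    public void onRemoval(RemovalNotification<File, FileInputStream> notification) {
                        try {
                            notification.getValue().close(); // release the descriptor on eviction
                        } catch (IOException ignored) {
                        }
                    }
                })
                .build(new CacheLoader<File, FileInputStream>() {
                    @Override
                    public FileInputStream load(File file) throws IOException {
                        return new FileInputStream(file);
                    }
                });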

    opened by maxboehm 5
  • Creating a secondary Index?

    Hey @dain, I have a use case where I want a bidirectional map, where I can search on both key and value (they are both unique). Currently, I store them twice in two different databases:

    String key = ...;       // generated from somewhere
    String hashedKey = ...; // encrypted form of the key

    mapDb.put(key, hashedKey);
    decryptMapDb.put(hashedKey, key);
    

    Our key and hashedKey are both unique.

    I was reading the leveldb Google Groups, where someone recommended creating a secondary index. The link is here.
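
    One way to picture a secondary index with this API (a sketch, not from the thread; the "f:" and "r:" prefixes are made up for illustration) is to keep both directions in one database and write them in a single WriteBatch so they stay consistent:

        WriteBatch batch = db.createWriteBatch();
        try {
            batch.put(bytes("f:" + key), bytes(hashedKey));  // forward: key -> hashedKey
            batch.put(bytes("r:" + hashedKey), bytes(key));  // reverse: hashedKey -> key
            db.write(batch);                                 // atomic: both entries or neither
        } finally {
            batch.close();
        }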

    opened by hhimanshu 4
  • Background-Compaction fails

    I got the following exception when the data size reached about 110 GB, around 1,800,000,000 rows. Need help, thanks.

           Options options = new Options();
        options.createIfMissing(false);
    
        options.compressionType(CompressionType.SNAPPY);
    
        DBFactory factory = Iq80DBFactory.factory;
    
    
        DB db;
        try {
            db = factory.open(from, options);
        } catch (IOException e) {
            throw new java.lang.RuntimeException(e);
        }
    

    org.iq80.leveldb.impl.DbImpl$BackgroundProcessingException: java.lang.NullPointerException
        at org.iq80.leveldb.impl.DbImpl.checkBackgroundException(DbImpl.java:411)
        at org.iq80.leveldb.impl.DbImpl.get(DbImpl.java:572)
        at org.iq80.leveldb.impl.DbImpl.get(DbImpl.java:565)
        ...
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.lang.NullPointerException
        at org.iq80.leveldb.impl.Compaction.totalFileSize(Compaction.java:129)
        at org.iq80.leveldb.impl.Compaction.isTrivialMove(Compaction.java:120)
        at org.iq80.leveldb.impl.DbImpl.backgroundCompaction(DbImpl.java:468)
        at org.iq80.leveldb.impl.DbImpl.backgroundCall(DbImpl.java:426)
        at org.iq80.leveldb.impl.DbImpl.access$100(DbImpl.java:83)
        at org.iq80.leveldb.impl.DbImpl$2.call(DbImpl.java:396)
        at org.iq80.leveldb.impl.DbImpl$2.call(DbImpl.java:390)

    opened by mapbased 4
  • [SECURITY] Fix Temporary File Information Disclosure Vulnerability

    Security Vulnerability Fix

    This pull request fixes a Temporary File Information Disclosure Vulnerability, which existed in this project.

    Preamble

    The system temporary directory is shared between all users on most unix-like systems (not macOS or Windows). Thus, code interacting with the system temporary directory must be careful about file interactions in this directory, and must ensure that the correct POSIX file permissions are set.

    This PR was generated because a call to File.createTempFile(..) was detected in this repository in a way that makes this project vulnerable to local information disclosure. With the default umask configuration, File.createTempFile(..) creates a file with the permissions -rw-r--r--. This means that any other user on the system can read the contents of this file.

    Impact

    Information in this file is visible to other local users, allowing a malicious actor co-resident on the same machine to view potentially sensitive files.

    Other Examples

    The Fix

    The fix has been to convert the logic above to use the following API that was introduced in Java 1.7.

    File tmpDir = Files.createTempFile("temp dir", null).toFile();
    

    The API both creates the file securely, i.e. with a random, non-conflicting name, and sets file permissions that only allow the currently executing user to read or write the contents of this file. By default, Files.createTempFile(..) will create a file with the permissions -rw-------, which only allows the user that created the file to view/write the file contents.

    :arrow_right: Vulnerability Disclosure :arrow_left:

    :wave: Vulnerability disclosure is a super important part of the vulnerability handling process and should not be skipped! This may be completely new to you, and that's okay, I'm here to assist!

    First question, do we need to perform vulnerability disclosure? It depends!

    1. Is the vulnerable code only in tests or example code? No disclosure required!
    2. Is the vulnerable code in code shipped to your end users? Vulnerability disclosure is probably required!

    Vulnerability Disclosure How-To

    You have a few options to perform vulnerability disclosure. However, I'd like to suggest the following 2 options:

    1. Request a CVE number from GitHub by creating a repository-level GitHub Security Advisory. This has the advantage that, if you provide sufficient information, GitHub will automatically generate Dependabot alerts for your downstream consumers, resolving this vulnerability more quickly.
    2. Reach out to the team at Snyk to assist with CVE issuance. They can be reached at the Snyk's Disclosure Email.

    Detecting this and Future Vulnerabilities

    This vulnerability was automatically detected by GitHub's CodeQL using this CodeQL Query.

    You can automatically detect future vulnerabilities like this by enabling the free (for open-source) GitHub Action.

    I'm not an employee of GitHub, I'm simply an open-source security researcher.

    Source

    This contribution was automatically generated with an OpenRewrite refactoring recipe, which was lovingly hand crafted to bring this security fix to your repository.

    The source code that generated this PR can be found here: SecureTempFileCreation

    Opting-Out

    If you'd like to opt-out of future automated security vulnerability fixes like this, please consider adding a file called .github/GH-ROBOTS.txt to your repository with the line:

    User-agent: JLLeitschuh/security-research
    Disallow: *
    

    This bot will respect the ROBOTS.txt format for future contributions.

    Alternatively, if this project is no longer actively maintained, consider archiving the repository.

    CLA Requirements

    This section is only relevant if your project requires contributors to sign a Contributor License Agreement (CLA) for external contributions.

    It is unlikely that I'll be able to directly sign CLAs. However, all contributed commits are already automatically signed-off.

    The meaning of a signoff depends on the project, but it typically certifies that the committer has the rights to submit this work under the same license and agrees to a Developer Certificate of Origin (see https://developercertificate.org/ for more information).

    - Git Commit Signoff documentation

    If signing your organization's CLA is a strict requirement for merging this contribution, please feel free to close this PR.

    Sponsorship & Support

    This contribution is sponsored by HUMAN Security Inc. and the new Dan Kaminsky Fellowship, a fellowship created to celebrate Dan's memory and legacy by funding open-source work that makes the world a better (and more secure) place.

    This PR was generated by Moderne, a free-for-open source SaaS offering that uses format-preserving AST transformations to fix bugs, standardize code style, apply best practices, migrate library versions, and fix common security vulnerabilities at scale.

    Tracking

    All PR's generated as part of this fix are tracked here: https://github.com/JLLeitschuh/security-research/issues/18

    opened by JLLeitschuh 0
  • Using Snapshots in other classes

    Hello,

    I have followed the documentation and there is limited information on how to use Snapshots in other methods and classes. When I open a database instance, take a snapshot, and then pass it to another method, the entire JVM crashes, which I have never seen before:

    A fatal error has been detected by the Java Runtime Environment:

    Here is how I am grabbing the Snapshot:

        public synchronized DBIterator getSnapShot() throws IOException {
    
            ReadOptions options = new ReadOptions();
            
            DBIterator iterator = null;
    
            try {
                setLinkPacDatabase(factory.open(new File(Finals.LINKPACDB_FILEPATH), setupLevelDbOptions(true, false)));
                options.snapshot(getLinkPacDatabase().getSnapshot());
                iterator = getLinkPacDatabase().iterator(options);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                getLinkPacDatabase().close();
            }
            return iterator;
        }
    

    But when I try to iterate the Snapshot like below in other classes, the JVM crashes:

        for (iterator.seekToFirst(); iterator.hasNext(); iterator.next()) {
    

    What is the correct way to take a Snapshot of the database and use it elsewhere?
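
    A minimal sketch of one workable pattern (not from the thread), assuming the database stays open for as long as the snapshot and iterator are in use; closing the DB first, as getSnapShot() above does in its finally block, unmaps the table files and can crash the JVM when the iterator is used afterwards. The names reuse those from the snippet above:

        DB db = factory.open(new File(Finals.LINKPACDB_FILEPATH), new Options().createIfMissing(true));
        try {
            ReadOptions ro = new ReadOptions();
            ro.snapshot(db.getSnapshot());
            DBIterator iterator = db.iterator(ro);
            try {
                for (iterator.seekToFirst(); iterator.hasNext(); iterator.next()) {
                    // use iterator.peekNext() here, while the db is still open
                }
            } finally {
                iterator.close();
                ro.snapshot().close();
            }
        } finally {
            db.close(); // only close the DB after the snapshot and iterator are done
        }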

    opened by platacc 3
  • Vulnerabilities from dependencies: CVE-2020-8908 CVE-2018-10237

    opened by MarkLTZ 0
  • Fix an issue where a large byte array may get pinned down in memory, which could lead to out of memory

    Why: FileMetadata stores the smallest and largest key internally to identify the start and end key range in a given file. The key is essentially a slice of a byte array storing a key-value entry. This causes the byte array to be pinned down in memory, which can be an issue when the value is fairly large. Eventually this can lead to out of memory.

    What: The change is to make a copy of the key (which should be typically small) into a new byte array so the original byte array (which can be large) can be freed.
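
    An illustrative sketch of that idea (not the actual patch; sharedBuffer, keyOffset, and keyLength are hypothetical names): copy the key bytes into a fresh, right-sized array so the large backing buffer can be garbage collected.

        // Detach the key from the buffer that also holds the (possibly huge) value.
        byte[] detachedKey = java.util.Arrays.copyOfRange(sharedBuffer, keyOffset, keyOffset + keyLength);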

    opened by wyzhang 0
  • compress Slice when create filemetadata


    When DbImpl is used as a singleton bean in a Spring application context for a long time, an OOM exception occurs. After inspecting the heap dump file, I found that the actual length of filemetadata.largest.userKey is 33, while the size of filemetadata.largest.userKey.data is 10485 KB.

    opened by Salpadding 2
Owner
Dain Sundstrom
Creator of Presto, Software Engineer
High Performance data structures and utility methods for Java

Agrona Agrona provides a library of data structures and utility methods that are a common need when building high-performance applications in Java. Ma

Real Logic 2.5k Jan 5, 2023
Bloofi: A java implementation of multidimensional Bloom filters

Bloofi: A java implementation of multidimensional Bloom filters Bloom filters are probabilistic data structures commonly used for approximate membersh

Daniel Lemire 71 Nov 2, 2022
A high performance caching library for Java

Caffeine is a high performance, near optimal caching library. For more details, see our user's guide and browse the API docs for the latest release. C

Ben Manes 13k Jan 5, 2023
Chronicle Bytes has a similar purpose to Java NIO's ByteBuffer with many extensions

Chronicle-Bytes Chronicle-Bytes Chronicle Bytes contains all the low level memory access wrappers. It is built on Chronicle Core’s direct memory and O

Chronicle Software : Open Source 334 Jan 1, 2023
High performance Java implementation of a Cuckoo filter - Apache Licensed

Cuckoo Filter For Java This library offers a similar interface to Guava's Bloom filters. In most cases it can be used interchangeably and has addition

Mark Gunlogson 161 Dec 30, 2022
An advanced, but easy to use, platform for writing functional applications in Java 8.

Getting Cyclops X (10) The latest version is cyclops:10.4.0 Stackoverflow tag cyclops-react Documentation (work in progress for Cyclops X) Integration

AOL 1.3k Dec 29, 2022
Eclipse Collections is a collections framework for Java with optimized data structures and a rich, functional and fluent API.

English | 中文 | Deutsch | Español | Ελληνικά | Français | 日本語 | Norsk (bokmål) | Português-Brasil | Русский | हिंदी Eclipse Collections is a comprehens

Eclipse Foundation 2.1k Dec 29, 2022
External-Memory Sorting in Java

Externalsortinginjava External-Memory Sorting in Java: useful to sort very large files using multiple cores and an external-memory algorithm. The vers

Daniel Lemire 235 Dec 29, 2022
A Java library for quickly and efficiently parsing and writing UUIDs

fast-uuid fast-uuid is a Java library for quickly and efficiently parsing and writing UUIDs. It yields the most dramatic performance gains when compar

Jon Chambers 142 Jan 1, 2023
Geohash utitlies in java

geo Java utility methods for geohashing. Status: production, available on Maven Central Maven site reports are here including javadoc. Add this to you

Dave Moten 386 Jan 1, 2023
Hollow is a java library and toolset for disseminating in-memory datasets from a single producer to many consumers for high performance read-only access.

Hollow Hollow is a java library and toolset for disseminating in-memory datasets from a single producer to many consumers for high performance read-on

Netflix, Inc. 1.1k Dec 25, 2022
High Performance Primitive Collections for Java

HPPC: High Performance Primitive Collections Collections of primitive types (maps, sets, stacks, lists) with open internals and an API twist (no java.

Carrot Search 890 Dec 28, 2022
Java library for the HyperLogLog algorithm

java-hll A Java implementation of HyperLogLog whose goal is to be storage-compatible with other similar offerings from Aggregate Knowledge. NOTE: This

Aggregate Knowledge (a Neustar service) 296 Dec 30, 2022
A simple integer compression library in Java

JavaFastPFOR: A simple integer compression library in Java License This code is released under the Apache License Version 2.0 http://www.apache.org/li

Daniel Lemire 487 Dec 30, 2022
Java Collections till the last breadcrumb of memory and performance

Koloboke A family of projects around collections in Java (so far). The Koloboke Collections API A carefully designed extension of the Java Collections

Roman Leventov 967 Nov 14, 2022
LMDB for Java

LMDB JNI LMDB JNI provide a Java API to LMDB which is an ultra-fast, ultra-compact key-value embedded data store developed by Symas for the OpenLDAP P

deephacks 201 Apr 6, 2022
Lightning Memory Database (LMDB) for Java: a low latency, transactional, sorted, embedded, key-value store

LMDB for Java LMDB offers: Transactions (full ACID semantics) Ordered keys (enabling very fast cursor-based iteration) Memory-mapped files (enabling o

null 680 Dec 23, 2022
LWJGL is a Java library that enables cross-platform access to popular native APIs useful in the development of graphics (OpenGL, Vulkan), audio (OpenAL), parallel computing (OpenCL, CUDA) and XR (OpenVR, LibOVR) applications.

LWJGL - Lightweight Java Game Library 3 LWJGL (https://www.lwjgl.org) is a Java library that enables cross-platform access to popular native APIs usef

Lightweight Java Game Library 4k Dec 29, 2022