Overview

Hollow

Hollow is a Java library and toolset for disseminating in-memory datasets from a single producer to many consumers for high-performance read-only access.

Documentation is available at http://hollow.how.

Getting Started

We recommend jumping into the quick start guide — you'll have a demo up and running in minutes, and a fully production-scalable implementation of Hollow at your fingertips in about an hour. From there, you can plug in your data model and it's off to the races.
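
For a quick feel for the API, here is a minimal sketch of a producer publishing to the local filesystem. The Movie type and the path are illustrative, and constructor signatures can differ between Hollow versions; the quick start guide is the authoritative walkthrough.

    import java.nio.file.Path;
    import java.nio.file.Paths;

    import com.netflix.hollow.api.producer.HollowProducer;
    import com.netflix.hollow.api.producer.fs.HollowFilesystemAnnouncer;
    import com.netflix.hollow.api.producer.fs.HollowFilesystemPublisher;

    public class QuickStartSketch {
        static class Movie {                 // illustrative data model
            long id;
            String title;
            Movie(long id, String title) { this.id = id; this.title = title; }
        }

        public static void main(String[] args) {
            Path publishDir = Paths.get("/tmp/hollow-quickstart"); // illustrative path
            HollowProducer producer = HollowProducer
                    .withPublisher(new HollowFilesystemPublisher(publishDir))
                    .withAnnouncer(new HollowFilesystemAnnouncer(publishDir))
                    .build();

            // Each cycle publishes a new immutable data state for consumers.
            producer.runCycle(state -> {
                state.add(new Movie(1, "The Queen's Gambit"));
                state.add(new Movie(2, "Stranger Things"));
            });
        }
    }

The consumer side attaches to the same location via HollowConsumer.withBlobRetriever(...); the quick start walks through both halves.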

Get Hollow

Release binaries are available from Maven Central and jCenter.

GroupID/Org          ArtifactID/Name   Latest Stable Version
com.netflix.hollow   hollow            5.1.3

In a Maven pom.xml file:

    ...
    <dependency>
        <groupId>com.netflix.hollow</groupId>
        <artifactId>hollow</artifactId>
        <version>5.1.3</version>
    </dependency>
    ...

In a Gradle build.gradle file:

    ...
    compile 'com.netflix.hollow:hollow:5.1.3'
    ...

Release candidate binaries, whose versions match the -rc\.* pattern, are available from the jCenter oss-candidate repository, which may be declared in a build.gradle file:

    ...
    repositories {
        maven {
            url 'https://dl.bintray.com/netflixoss/oss-candidate/'
        }
    }
    ...

Get Support

Hollow is maintained by the Platform Data Technologies team at Netflix. Support can be obtained directly from us or from fellow users through Gitter, or by opening an issue in this project.

LICENSE

Copyright (c) 2016 Netflix, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • Support ARM64/AWS Graviton Instances by not performing unaligned accesses

    Resolves https://github.com/Netflix/hollow/issues/517 and allows Hollow to be used on ARM based machines such as AWS Graviton instances.

    There are a few things left to finish up, but I wanted to get this in front of people sooner rather than later to get some feedback on the design here.

    What I've done is swapped HollowUnsafeHandle from being simply a way to get an instance of sun.misc.Unsafe to a singleton class wrapping sun.misc.Unsafe. It seems the only classes that perform unaligned accesses are the FixedLengthElementArray and SegmentedLongArray families, so it might also make sense to have only those classes call an unaligned helper method on HollowUnsafeHandle, rather than having all unsafe calls go through this class. Let me know if this would be preferred.

    I've tested this (but only the consumer side) on an r6g.4xlarge instance on AWS, and performance is very good. Also, the kernel doesn't kill it with a SIGBUS 🙂

    I'm new to the sun.misc.Unsafe API, so there are a few things I haven't been able to figure out how to do yet:

    • How do we detect whether the host platform supports unaligned accesses?

    OpenJDK 8 does it like this internally, but there doesn't seem to be a public-facing API for it, at least in JDK 8. Maybe a configuration setting would be most appropriate if there's no reliable way to auto-detect it.

    • Improve efficiency by checking the alignment of the address

    I'd like to check if the address is aligned, and use the faster putOrderedLong/getLong calls directly if possible. Since all the accesses use the object + offset form instead of raw addresses, there doesn't seem to be a way to do this unless we can retrieve the address of the base object, which doesn't seem to be possible through sun.misc.Unsafe.
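
    To make the discussion concrete, here is a rough sketch (not the PR's actual code) of the kind of helper being considered: try the fast path when the offset looks aligned, otherwise assemble the value byte by byte. As noted above, an aligned offset does not guarantee an aligned absolute address under the object + offset form, so treat this as illustrative only:

        import sun.misc.Unsafe;

        final class UnalignedReadSketch {
            // Illustrative helper: byte-wise reads are always aligned, so they
            // are safe on platforms (e.g. some ARM configurations) that fault
            // on unaligned 8-byte accesses.
            static long getLongUnaligned(Unsafe unsafe, Object base, long offset) {
                if ((offset & 7) == 0)                    // offset looks 8-byte aligned
                    return unsafe.getLong(base, offset);  // fast path (see caveat above)
                long value = 0;
                for (int i = 7; i >= 0; i--)              // little-endian assembly
                    value = (value << 8) | (unsafe.getByte(base, offset + i) & 0xFFL);
                return value;
            }
        }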

    Thanks for your time.

    opened by Dillonb 11
  • Metrics and metrics collector

    This is for https://github.com/Netflix/hollow/issues/76

    This is only the first phase, which contains the implementation of the core metrics.

    The external module will be implemented in a second phase.

    opened by rpalcolea 11
  • Override default for 'sourceCompatibility' from 'nebula.netflixoss', update latest stable version to '3.0.1'

    I've moved the settings for sourceCompatibility and targetCompatibility to the subprojects section to override the default from the nebula.netflixoss plugin. Without this, the project failed to build after checkout with the following error:

    A problem occurred configuring project ':hollow'.
    > Could not locate a compatible JDK for target compatibility 1.7. Change the source/target compatibility, set a JDK_17 environment variable with the location, or install to one of the default search locations
    

    I've also noticed that the latest stable version hasn't been updated for quite some time, so I'd like to change that as well.

    opened by blekit 9
  • gradle plugin for consumer api generation

    According to https://github.com/Netflix/hollow/issues/31

    I need some help - push commits or give an explanation, whatever fits you best:

    1. I'm not sure what the correct way is to publish plugins and related artifacts under the Netflix trademark - should it be published to some special place, or directly to https://plugins.gradle.org/? I haven't published it anywhere yet, so only local publishing is available at the moment if you want to see it in action.

    2. It's also an open question for me how to version the plugin - separately from hollow-core (and if so, should it start at 1.0.0 or 0.1.0?) or inherit the version from the root project.

    3. My English is not very good, so the README might be kind of awkward and some names over-detailed.

    opened by IgorPerikov 9
  • Hollow Documentation

    Are there plans to open source the docs at http://hollow.how? It would be nice to contribute things like the incremental producer, Google Cloud Storage, and other examples.

    enhancement 
    opened by rpalcolea 8
  • Replace all calls to putOrderedLong with putLong

    An alternative approach to #518

    It seems like the culprit wasn't the unaligned write, but rather the memory fence that putOrderedLong creates.

    This is a much cleaner solution that still provides all the safety of putOrderedLong through the calls to storeFence().
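
    As a rough sketch of that pattern (not this PR's actual diff), per-write ordered stores become plain stores, with one explicit fence before the data is published to readers:

        import sun.misc.Unsafe;

        final class FencedWriteSketch {
            // Before: unsafe.putOrderedLong(array, offset, value) on every write.
            // After: plain putLong writes plus a single storeFence() at the end.
            static void writeBatch(Unsafe unsafe, long[] array, long[] values) {
                long base = Unsafe.ARRAY_LONG_BASE_OFFSET;   // offset of element 0
                long scale = Unsafe.ARRAY_LONG_INDEX_SCALE;  // bytes per element
                for (int i = 0; i < values.length; i++)
                    unsafe.putLong(array, base + i * scale, values[i]); // plain writes
                unsafe.storeFence(); // all writes become visible before publication
            }
        }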

    opened by Dillonb 5
  • Comparison with Kafka

    This might be a stupid question, but why not just use Kafka? In Kafka, too, we can have a single producer producing data and multiple consumers consuming it. The only difference I can see is that in Kafka the data passes through a rather elaborate broker system (Producer -> Broker -> Consumer), whereas in Hollow it's directly Producer -> Consumer.

    opened by Anmol-Singh-Jaggi 5
  • Improve safety of assigned ordinals

    It's possible to accidentally misuse the __assigned_ordinal field (a correct declaration is sketched below the excerpt):

    • If it's defined as int, the value is used as the ordinal
    • If the value is accidentally final, it fails silently (very easy to do with Kotlin val)
    • HollowConstants defines ORDINAL_NONE as an int which could lead to confusion (and makes Kotlin unhappy - it wants you to be explicit about int -> long conversions)
        /**
         * An ordinal of NULL_ORDINAL signifies "null reference" or "no ordinal"
         */
        int ORDINAL_NONE = -1;
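
    For reference, here is a sketch of the declaration that works, given the conventions described above (the Movie type is hypothetical; the field must be a non-final long):

        public class Movie {
            long id;
            String title;

            // Hollow's object mapper writes the record's assigned ordinal into
            // this field. Declaring it as int, or letting it become final
            // (easy to do with a Kotlin val), breaks this silently, as
            // described above. -1 mirrors HollowConstants.ORDINAL_NONE.
            private long __assigned_ordinal = -1;
        }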
    
    bug 
    opened by DanielThomas 5
  • Add index and hash JMH benchmarks

    I needed to make some decisions about how to best model/index data for a performance sensitive use case and rather than go through trial and error in the actual implementation I figured I'd add some benchmarks here instead.

    The full suite has a lot of permutations, so it would take several hours to run; I cherry-picked a couple of configurations that give a meaningful comparison. Would appreciate some feedback on the methodology to make sure the results are correct; the parameterized shape of the benchmarks is sketched after the questions below.

    Our use case would currently have ~65 million objects, with cardinality ranging from 1 to 100 thousand - the main questions I was trying to answer here were:

    • Nested or non-nested objects for index fields
    • Compound or non-compound keys
    • Should we seek to reduce cardinality by adding separate collection holder type and group them in the consumer, or is HashIndex creation/load performance good enough in the consumer
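
    For context, the benchmarks follow the standard parameterized JMH shape; here is a sketch (not the actual benchmark source) of how the columns in the tables below map onto JMH annotations:

        import org.openjdk.jmh.annotations.Benchmark;
        import org.openjdk.jmh.annotations.Param;
        import org.openjdk.jmh.annotations.Scope;
        import org.openjdk.jmh.annotations.Setup;
        import org.openjdk.jmh.annotations.State;

        @State(Scope.Benchmark)
        public class HashIndexBenchmarkSketch {
            @Param({"1", "1000"})     int cardinality; // matches per key
            @Param({"false", "true"}) boolean nested;  // nested vs. flat index fields
            @Param({"1", "4"})        int querySize;   // number of fields queried
            @Param({"1000000"})       int size;        // total object count

            @Setup
            public void setup() {
                // populate a Hollow state engine with `size` objects and build
                // the index under test (elided)
            }

            @Benchmark
            public Object findMatches() {
                // query the index; return the result so JMH keeps it alive
                return Integer.valueOf(cardinality);   // placeholder for the real query
            }
        }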

    HashIndex

    cardinality refers to the number of matches (i.e. number of objects with identical fields).

    Benchmark                                    (cardinality)  (nested)  (querySize)   (size)  Mode  Cnt    Score   Error  Units
    HollowHashIndexBenchmark.createIndex                     1     false            1  1000000  avgt         0.470           s/op
    HollowHashIndexBenchmark.createIndex                     1     false            4  1000000  avgt         0.511           s/op
    HollowHashIndexBenchmark.createIndex                     1      true            1  1000000  avgt         0.472           s/op
    HollowHashIndexBenchmark.createIndex                     1      true            4  1000000  avgt         0.555           s/op
    HollowHashIndexBenchmark.findMatches                     1     false            1  1000000  avgt       369.577          ns/op
    HollowHashIndexBenchmark.findMatches                     1     false            4  1000000  avgt       407.978          ns/op
    HollowHashIndexBenchmark.findMatches                     1      true            1  1000000  avgt       620.034          ns/op
    HollowHashIndexBenchmark.findMatches                     1      true            4  1000000  avgt       642.681          ns/op
    HollowHashIndexBenchmark.findMatchesMissing              1     false            1  1000000  avgt        46.252          ns/op
    HollowHashIndexBenchmark.findMatchesMissing              1     false            4  1000000  avgt        35.563          ns/op
    HollowHashIndexBenchmark.findMatchesMissing              1      true            1  1000000  avgt        64.633          ns/op
    HollowHashIndexBenchmark.findMatchesMissing              1      true            4  1000000  avgt        35.785          ns/op
    

    Performance improves as cardinality increases:

    Benchmark                                    (cardinality)  (nested)  (querySize)   (size)  Mode  Cnt    Score   Error  Units
    HollowHashIndexBenchmark.createIndex                  1000     false            1  1000000  avgt         0.001           s/op
    HollowHashIndexBenchmark.createIndex                  1000     false            4  1000000  avgt         0.002           s/op
    HollowHashIndexBenchmark.createIndex                  1000      true            1  1000000  avgt         0.001           s/op
    HollowHashIndexBenchmark.createIndex                  1000      true            4  1000000  avgt         0.002           s/op
    HollowHashIndexBenchmark.findMatches                  1000     false            1  1000000  avgt        65.274          ns/op
    HollowHashIndexBenchmark.findMatches                  1000     false            4  1000000  avgt       133.855          ns/op
    HollowHashIndexBenchmark.findMatches                  1000      true            1  1000000  avgt        77.624          ns/op
    HollowHashIndexBenchmark.findMatches                  1000      true            4  1000000  avgt       221.643          ns/op
    HollowHashIndexBenchmark.findMatchesMissing           1000     false            1  1000000  avgt        22.621          ns/op
    HollowHashIndexBenchmark.findMatchesMissing           1000     false            4  1000000  avgt        34.524          ns/op
    HollowHashIndexBenchmark.findMatchesMissing           1000      true            1  1000000  avgt        22.642          ns/op
    HollowHashIndexBenchmark.findMatchesMissing           1000      true            4  1000000  avgt        34.178          ns/op
    

    PrimaryKeyIndex

    Benchmark                                                 (nested)  (querySize)   (size)  Mode  Cnt    Score   Error  Units
    HollowPrimaryKeyIndexBenchmark.createIndex                   false            1  1000000  avgt         0.063           s/op
    HollowPrimaryKeyIndexBenchmark.createIndex                   false            4  1000000  avgt         0.095           s/op
    HollowPrimaryKeyIndexBenchmark.createIndex                    true            1  1000000  avgt         0.116           s/op
    HollowPrimaryKeyIndexBenchmark.createIndex                    true            4  1000000  avgt         0.189           s/op
    HollowPrimaryKeyIndexBenchmark.getMatchingOrdinal            false            1  1000000  avgt       305.982          ns/op
    HollowPrimaryKeyIndexBenchmark.getMatchingOrdinal            false            4  1000000  avgt       333.696          ns/op
    HollowPrimaryKeyIndexBenchmark.getMatchingOrdinal             true            1  1000000  avgt       563.586          ns/op
    HollowPrimaryKeyIndexBenchmark.getMatchingOrdinal             true            4  1000000  avgt       595.999          ns/op
    HollowPrimaryKeyIndexBenchmark.getMatchingOrdinalMissing     false            1  1000000  avgt        40.421          ns/op
    HollowPrimaryKeyIndexBenchmark.getMatchingOrdinalMissing     false            4  1000000  avgt        36.977          ns/op
    HollowPrimaryKeyIndexBenchmark.getMatchingOrdinalMissing      true            1  1000000  avgt        73.481          ns/op
    HollowPrimaryKeyIndexBenchmark.getMatchingOrdinalMissing      true            4  1000000  avgt        41.947          ns/op
    

    HashCodes

    I went with int keys in the benchmarks to avoid the hash function being a significant contributor, and added some benchmarks to look at the hash functions separately. You might want to look at the String variant of hashCode: there's some weirdness with multibyte Strings, and it has worse performance than hashing a byte array of the same length (a reproduction sketch follows the table):

    Benchmark                               (length)  Mode  Cnt     Score   Error  Units
    HashCodesBenchmark.hashBytes                   1  avgt          7.247          ns/op
    HashCodesBenchmark.hashBytes                   2  avgt          7.857          ns/op
    HashCodesBenchmark.hashBytes                   3  avgt          7.871          ns/op
    HashCodesBenchmark.hashBytes                   5  avgt          8.933          ns/op
    HashCodesBenchmark.hashBytes                  10  avgt         12.028          ns/op
    HashCodesBenchmark.hashBytes                 100  avgt         56.531          ns/op
    HashCodesBenchmark.hashBytes                1000  avgt        524.596          ns/op
    HashCodesBenchmark.hashInt                     1  avgt          2.630          ns/op
    HashCodesBenchmark.hashInt                     2  avgt          2.671          ns/op
    HashCodesBenchmark.hashInt                     3  avgt          2.656          ns/op
    HashCodesBenchmark.hashInt                     5  avgt          2.656          ns/op
    HashCodesBenchmark.hashInt                    10  avgt          2.638          ns/op
    HashCodesBenchmark.hashInt                   100  avgt          2.633          ns/op
    HashCodesBenchmark.hashInt                  1000  avgt          2.628          ns/op
    HashCodesBenchmark.hashLong                    1  avgt          2.705          ns/op
    HashCodesBenchmark.hashLong                    2  avgt          2.701          ns/op
    HashCodesBenchmark.hashLong                    3  avgt          2.715          ns/op
    HashCodesBenchmark.hashLong                    5  avgt          2.731          ns/op
    HashCodesBenchmark.hashLong                   10  avgt          2.737          ns/op
    HashCodesBenchmark.hashLong                  100  avgt          2.715          ns/op
    HashCodesBenchmark.hashLong                 1000  avgt          2.736          ns/op
    HashCodesBenchmark.hashString                  1  avgt          9.322          ns/op
    HashCodesBenchmark.hashString                  2  avgt         10.185          ns/op
    HashCodesBenchmark.hashString                  3  avgt         10.487          ns/op
    HashCodesBenchmark.hashString                  5  avgt         13.324          ns/op
    HashCodesBenchmark.hashString                 10  avgt         18.794          ns/op
    HashCodesBenchmark.hashString                100  avgt         98.877          ns/op
    HashCodesBenchmark.hashString               1000  avgt        868.692          ns/op
    HashCodesBenchmark.hashStringMultibyte         1  avgt         24.457          ns/op
    HashCodesBenchmark.hashStringMultibyte         2  avgt         26.278          ns/op
    HashCodesBenchmark.hashStringMultibyte         3  avgt         30.154          ns/op
    HashCodesBenchmark.hashStringMultibyte         5  avgt         31.497          ns/op
    HashCodesBenchmark.hashStringMultibyte        10  avgt         43.098          ns/op
    HashCodesBenchmark.hashStringMultibyte       100  avgt        286.791          ns/op
    HashCodesBenchmark.hashStringMultibyte      1000  avgt       2789.318          ns/op
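
    For anyone reproducing the String observation outside JMH, something like this should work; the HashCodes overloads are my assumption from the benchmark names above:

        import java.nio.charset.StandardCharsets;

        import com.netflix.hollow.core.memory.encoding.HashCodes;

        public class HashStringCheck {
            public static void main(String[] args) {
                String multibyte = "héllo wörld"; // illustrative multibyte input
                // Per the table above, hashing the String directly costs more
                // than hashing a byte array of the same length:
                int viaString = HashCodes.hashCode(multibyte);
                int viaBytes = HashCodes.hashCode(
                        multibyte.getBytes(StandardCharsets.UTF_8));
                System.out.println(viaString + " vs " + viaBytes);
            }
        }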
    
    opened by DanielThomas 5
  • Bug fix for deleted records being inserted into the write state as Object

    This caused a restore operation to fail with "java.lang.IllegalStateException: When state engine was restored, not all necessary states were present! Unrestored states: [Object]"

    Used the static DELETE_RECORD object from HollowIncrementalCyclePopulator when adding items to be deleted to the mutations collection, and removed the misleading DELETE_RECORD from HollowIncrementalProducer.

    opened by duro1 5
  • Add com.netflix.hollow.core.type support for Byte

    Rather than generating Byte classes when we encounter a Byte as part of another Type, introduce a top-level Byte type as part of the core Hollow types. Bytes are stored as integers under the hood, and the generated classes will use integers, similar to how a primitive byte gets converted to an int. Adding support for byte to the Hollow schema is out of scope for this PR.

    opened by akhaku 5
  • Move from velocity:1.7 to velocity-engine-core:2.3

    A simplistic attempt to resolve the security issue in https://github.com/Netflix/hollow/issues/578. I have just checked that the build passes; no manual testing was done on my side.

    opened by rydenius 2
  • SQL-like querying for Hollow Explorer

    Hey everyone!

    We're using Hollow a lot in our projects and Hollow Explorer is a really great (fast and snappy) tool to search through the data.

    One feature that we believe would be useful in Explorer is the ability to select the fields of the ordinals returned by a search and get a paged result with that data, so that it can be exported in CSV/TXT format.

    Use case: you launch a query for some key and get back 100 ordinals. If you want to look at some other fields in the returned data, you have to click on each ordinal. It would be really useful if you could just specify the fields you need when querying, and only those would be returned.

    If this is already supported, please let me know, I didn't manage to find this feature.

    If that's not the case, what do you think about it?

    Thanks!

    opened by cosmin-ionita 0
  • Condition 'doubleSnapshotConfig.allowDoubleSnapshot()' should be negated

    https://github.com/Netflix/hollow/blob/d4d3dbed944a8885ba128843a61e569ddbcce08e/hollow/src/main/java/com/netflix/hollow/api/client/HollowDataHolder.java#L121

    The code is inconsistent with the comment 'If the consumer is configured to only follow deltas (no double snapshot) then any failure to transition will cause the consumer to become "stuck" on stale data', so the condition 'doubleSnapshotConfig.allowDoubleSnapshot()' should be negated.
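
    In other words, the guard presumably needs to read roughly as follows (illustrative only: the surrounding method body is invented, and only the config call and the quoted comment come from the source):

        // Sketch, not the actual HollowDataHolder code. A consumer that is
        // NOT allowed to double snapshot has no fallback when a delta
        // transition fails, so that is the case to act on:
        void onTransitionFailure(HollowConsumer.DoubleSnapshotConfig doubleSnapshotConfig) {
            if (!doubleSnapshotConfig.allowDoubleSnapshot()) {
                // surface the failure rather than leaving the consumer
                // silently "stuck" on stale data
                throw new IllegalStateException(
                        "delta transition failed and double snapshots are disabled");
            }
        }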

    opened by wangsfx 0
  • Error while writing the new state

    java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.NullPointerException
    	at com.netflix.hollow.api.producer.HollowIncrementalCyclePopulator.addRecords(HollowIncrementalCyclePopulator.java:144) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.api.producer.HollowIncrementalCyclePopulator.populate(HollowIncrementalCyclePopulator.java:53) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.api.producer.HollowProducer.runCycle(HollowProducer.java:438) [golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.api.producer.HollowProducer.runCycle(HollowProducer.java:390) [golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.api.producer.HollowIncrementalProducer.runCycle(HollowIncrementalProducer.java:206) [golftec-api-1.0-jar-with-dependencies.jar:na]
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_292]
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_292]
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_292]
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_292]
    	at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_292]
    Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
    	at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_292]
    	at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_292]
    	at com.netflix.hollow.core.util.SimultaneousExecutor.awaitSuccessfulCompletion(SimultaneousExecutor.java:118) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.api.producer.HollowIncrementalCyclePopulator.addRecords(HollowIncrementalCyclePopulator.java:142) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	... 10 common frames omitted
    Caused by: java.lang.NullPointerException: null
    	at com.netflix.hollow.core.write.objectmapper.HollowObjectTypeMapper.write(HollowObjectTypeMapper.java:170) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.core.write.objectmapper.HollowMapTypeMapper.write(HollowMapTypeMapper.java:76) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.core.write.objectmapper.HollowObjectTypeMapper$MappedField.copy(HollowObjectTypeMapper.java:470) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.core.write.objectmapper.HollowObjectTypeMapper.write(HollowObjectTypeMapper.java:176) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.core.write.objectmapper.HollowObjectMapper.add(HollowObjectMapper.java:70) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.api.producer.WriteStateImpl.add(WriteStateImpl.java:41) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	at com.netflix.hollow.api.producer.HollowIncrementalCyclePopulator$2.run(HollowIncrementalCyclePopulator.java:136) ~[golftec-api-1.0-jar-with-dependencies.jar:na]
    	... 5 common frames omitted
    

    I got this error while writing the new state. Can anyone offer a guess to give me more clues for investigating it?
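
    One pattern that can produce this exact trace (an assumption, not a confirmed diagnosis) is a null key or value inside a Map field of a record handed to the object mapper:

        import java.util.HashMap;
        import java.util.Map;

        public class NullMapValueExample {
            static class Record {                   // hypothetical mapped type
                Map<String, String> attributes = new HashMap<>();
            }

            public static void main(String[] args) {
                Record record = new Record();
                record.attributes.put("key", null); // null value in a Map field
                // Adding `record` via HollowObjectMapper descends through
                // HollowMapTypeMapper.write into HollowObjectTypeMapper.write,
                // the same frames shown in the trace above.
            }
        }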

    Thanks!

    opened by maiakhoa 2
  • question about this run in docker

    Our app runs in a Docker container, so when we reboot the app it may land on a different server. My understanding is that Hollow downloads the data from OSS, restores it on the local server, and then loads it into memory. Does that mean that every time I reboot the Docker app, the consumer will download the data from OSS to the local server again?

    opened by coderElijah 0