A simple integer compression library in Java

Overview

JavaFastPFOR: A simple integer compression library in Java


License

This code is released under the Apache License Version 2.0 http://www.apache.org/licenses/.

What does this do?

It is a library to compress and uncompress arrays of integers very quickly. The assumption is that most (but not all) values in your array use much less than 32 bits, or that the gaps between consecutive integers use much less than 32 bits. Such arrays often come up when using differential coding in databases and information retrieval (e.g., in inverted indexes or column stores).

Please note that random integers are not compressible, by this library or by any other means. If you ever had the means of systematically compressing random integers, you could compress any data source to nothing, by recursive application of your technique.

This library can decompress integers at a rate of over 1.2 billion integers per second (4.5 GB/s). It is significantly faster than generic codecs (such as Snappy, LZ4 and so on) when compressing arrays of integers.

The library is used in LinkedIn Pinot, a real-time distributed OLAP datastore. Part of this library has been integrated into Parquet (http://parquet.io/). A modified version of the library is included in the search engine Terrier (http://terrier.org/). This library is used by ClueWeb Tools (https://github.com/lintool/clueweb). It is also used by Apache NiFi.

This library inspired a compression scheme used by Apache Lucene and Apache Lucene.NET (e.g., see http://lucene.apache.org/core/4_6_1/core/org/apache/lucene/util/PForDeltaDocIdSet.html ).

It is a Java port of the FastPFor C++ library (https://github.com/lemire/FastPFor). There is also a Go port (https://github.com/reducedb/encoding). The C++ library is used by the zsearch engine (http://victorparmar.github.com/zsearch/) as well as in GMAP and GSNAP (http://research-pub.gene.com/gmap/).

Usage

Really simple usage:

        // requires: import me.lemire.integercompression.differential.IntegratedIntCompressor;
        IntegratedIntCompressor iic = new IntegratedIntCompressor();
        int[] data = ... ; // array to be compressed
        int[] compressed = iic.compress(data); // compressed array
        int[] recov = iic.uncompress(compressed); // recovered array, equal to data

For more examples, see example.java or the examples folder.

JavaFastPFOR supports compressing and uncompressing data in chunks (e.g., see advancedExample in https://github.com/lemire/JavaFastPFOR/blob/master/example.java).
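For instance, here is a minimal sketch of chunk-by-chunk compression with the high-level API (the variable names and chunk size here are illustrative, not from the library; see the linked advancedExample for the full version):

    // requires: java.util.ArrayList, java.util.Arrays, java.util.List
    // data is the (sorted) int[] to compress
    IntegratedIntCompressor iic = new IntegratedIntCompressor();
    int chunkSize = 16384; // a multiple of 128 works well
    List<int[]> compressedChunks = new ArrayList<>();
    for (int begin = 0; begin < data.length; begin += chunkSize) {
        int end = Math.min(begin + chunkSize, data.length);
        // each chunk is compressed independently and can later be
        // recovered on its own with iic.uncompress(...)
        compressedChunks.add(iic.compress(Arrays.copyOfRange(data, begin, end)));
    }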

Some CODECs ("integrated codecs") assume that the integers are in sorted order and use differential coding (they compress the deltas). They can be found in the package me.lemire.integercompression.differential. Most other codecs do not make this assumption.
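Here is a rough sketch of the lower-level IntegerCODEC interface with a non-integrated codec; it follows the Composition/IntWrapper calls that appear in the issue reports below, and the output buffer size is an assumption:

    IntegerCODEC codec = new Composition(new BinaryPacking(), new VariableByte());
    int[] data = {2, 3, 4, 5, 100, 2000}; // need not be sorted for this codec
    int[] compressed = new int[data.length + 1024]; // generous scratch buffer
    IntWrapper inPos = new IntWrapper(0);
    IntWrapper outPos = new IntWrapper(0);
    codec.compress(data, inPos, data.length, compressed, outPos);
    int compressedLength = outPos.get(); // number of ints actually written

    int[] recovered = new int[data.length];
    codec.uncompress(compressed, new IntWrapper(0), compressedLength,
            recovered, new IntWrapper(0));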

Maven central repository

Using this code in your own project is easy with Maven: just add the following to your pom.xml file:

<dependencies>
  <dependency>
    <groupId>me.lemire.integercompression</groupId>
    <artifactId>JavaFastPFOR</artifactId>
    <version>[0.1,)</version>
  </dependency>
</dependencies>

Naturally, you should replace the version range with the specific version you desire.

You can also download JavaFastPFOR from the Maven central repository: http://repo1.maven.org/maven2/me/lemire/integercompression/JavaFastPFOR/

Why?

We found no Java library that implemented state-of-the-art integer coding techniques such as Binary Packing, NewPFD, OptPFD, Variable Byte, Simple 9 and so on. So we wrote one.

Thread safety

Some codecs are thread-safe while others are not. For this reason, it is best to use one codec per thread. The memory usage of a codec instance is small in any case.

Nevertheless, if you want to reuse codec instances, note that by convention, unless the documentation of a codec specifies that it is not thread-safe, it can be assumed to be thread-safe.
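For example, a simple way to respect the one-codec-per-thread advice while still reusing instances is a ThreadLocal holder (a sketch; the codec chosen here is illustrative):

    private static final ThreadLocal<IntegerCODEC> CODEC = ThreadLocal.withInitial(
            () -> new Composition(new BinaryPacking(), new VariableByte()));

    // each thread then works with its own instance:
    IntegerCODEC codec = CODEC.get();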

Authors

Main contributor: Daniel Lemire, with contributions by many others.

How does it compare to the Kamikaze PForDelta library?

In our tests, Kamikaze PForDelta is slower than our implementations. See the benchmarkresults directory for some results.

https://github.com/lemire/JavaFastPFOR/blob/master/benchmarkresults/benchmarkresults_icore7_10may2013.txt

Reference: http://sna-projects.com/kamikaze/

Requirements

A recent Java compiler. Java 7 or better is recommended.

Good instructions on installing Java 7 on Linux:

http://forums.linuxmint.com/viewtopic.php?f=42&t=93052

How fast is it?

Compile the code and execute me.lemire.integercompression.benchmarktools.Benchmark.

I recommend running all the benchmarks with the "-server" flag on a desktop machine.

Speed is always reported in millions of integers per second.

For Maven users

mvn compile

mvn exec:java

For Ant users

If you use Apache Ant, please try this:

$ ant Benchmark

or:

$ ant Benchmark -Dbenchmark.target=BenchmarkBitPacking

API Documentation

http://www.javadoc.io/doc/me.lemire.integercompression/JavaFastPFOR/

Want to read more?

This library was a key ingredient in the best paper at ECIR 2014:

Matteo Catena, Craig Macdonald, Iadh Ounis, On Inverted Index Compression for Search Engine Efficiency, Lecture Notes in Computer Science 8416 (ECIR 2014), 2014. http://dx.doi.org/10.1007/978-3-319-06028-6_30

We wrote several research papers documenting many of the CODECs implemented here, including:

Daniel Lemire and Leonid Boytsov, Decoding billions of integers per second through vectorization, Software: Practice & Experience 45 (1), 2015. http://arxiv.org/abs/1209.2137

Ikhtear Sharif wrote his M.Sc. thesis on this library:

Ikhtear Sharif, Performance Evaluation of Fast Integer Compression Techniques Over Tables, M.Sc. thesis, UNB 2013. http://lemire.me/fr/documents/thesis/IkhtearThesis.pdf

He also posted his slides online: http://www.slideshare.net/ikhtearSharif/ikhtear-defense

Other recommended libraries

Funding

This work was supported by NSERC grant number 26143.

Comments
  • ArrayIndexOutOfBoundsException

    I'm using the same codec as in the basic example. I can encode one array, but with my other array, I get this exception:

    Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException
        at java.lang.System.arraycopy(Native Method)
        at integercompression.IntegratedBitPacking.integratedpack32(IntegratedBitPacking.java:4694)
        at integercompression.IntegratedBitPacking.integratedpack(IntegratedBitPacking.java:4915)
        at integercompression.IntegratedBinaryPacking.compress(IntegratedBinaryPacking.java:24)
        at integercompression.IntegratedComposition.compress(IntegratedComposition.java:20)
    
    opened by tom-adsfund 14
  • Delta zigzag encoding binary packing

    I have implemented DeltaZigzagBinaryPacking as an experiment. Zigzag encoding comes from protocol buffers.

    It achieves a good (i.e., small) "bits per int" for some kinds of input sequences, such as sensor output. But it is a bit slower than the other BinaryPacking codecs, because I did not optimize it.
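    For reference, the zigzag mapping used in protocol buffers interleaves negative and positive values so that values of small magnitude map to small unsigned values; a minimal sketch (not the library's internal code):

        // -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...
        static int zigzagEncode(int n) {
            return (n << 1) ^ (n >> 31);
        }

        static int zigzagDecode(int z) {
            return (z >>> 1) ^ -(z & 1);
        }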

    Benchmark's partial result is here:

    | Dataset | CODEC | Bits per int | Compress speed (MiS) | Decompress speed (MiS) |
    | --- | --- | --- | --- | --- |
    | Sorted (mean=1048576 range=1024) | JustCopy | 32.00 | 2449 | 8921 |
    | Sorted (mean=1048576 range=1024) | BinaryPacking | 20.79 | 975 | 1458 |
    | Sorted (mean=1048576 range=1024) | DeltaZigzagBinaryPacking | 4.07 | 513 | 693 |
    | Sorted (mean=1048576 range=1024) | IntegratedBinaryPacking | 3.07 | 585 | 1233 |
    | Sorted (mean=1048576 range=1024) | XorBinaryPacking | 7.34 | 454 | 785 |
    | Sorted (mean=1048576 range=1024) | FastPFOR | 20.78 | 322 | 1185 |
    | Random (mean=1048576 range=1024) | JustCopy | 32.00 | 2498 | 8677 |
    | Random (mean=1048576 range=1024) | BinaryPacking | 21.28 | 981 | 1439 |
    | Random (mean=1048576 range=1024) | DeltaZigzagBinaryPacking | 11.54 | 515 | 667 |
    | Random (mean=1048576 range=1024) | IntegratedBinaryPacking | 32.28 | 940 | 2695 |
    | Random (mean=1048576 range=1024) | XorBinaryPacking | 21.28 | 443 | 817 |
    | Random (mean=1048576 range=1024) | FastPFOR | 21.23 | 356 | 1218 |
    | Sine (mean=1048576 range=1024 freq=1024) | JustCopy | 32.00 | 3574 | 9049 |
    | Sine (mean=1048576 range=1024 freq=1024) | BinaryPacking | 20.88 | 968 | 1308 |
    | Sine (mean=1048576 range=1024 freq=1024) | DeltaZigzagBinaryPacking | 6.24 | 544 | 638 |
    | Sine (mean=1048576 range=1024 freq=1024) | IntegratedBinaryPacking | 21.28 | 777 | 1608 |
    | Sine (mean=1048576 range=1024 freq=1024) | XorBinaryPacking | 12.20 | 455 | 734 |
    | Sine (mean=1048576 range=1024 freq=1024) | FastPFOR | 21.11 | 332 | 1101 |
    | Sine (mean=1048576 range=64 freq=64) | JustCopy | 32.00 | 2318 | 8507 |
    | Sine (mean=1048576 range=64 freq=64) | BinaryPacking | 20.88 | 928 | 1296 |
    | Sine (mean=1048576 range=64 freq=64) | DeltaZigzagBinaryPacking | 2.82 | 498 | 709 |
    | Sine (mean=1048576 range=64 freq=64) | IntegratedBinaryPacking | 19.38 | 747 | 1706 |
    | Sine (mean=1048576 range=64 freq=64) | XorBinaryPacking | 9.08 | 446 | 781 |
    | Sine (mean=1048576 range=64 freq=64) | FastPFOR | 21.11 | 331 | 1114 |

    @lemire What do you think about this DeltaZigzagBinaryPacking? Does it seem worth merging into master?

    question 
    opened by koron 13
  • Not reconstructing correctly

    I changed the start of varyingLengthTest in BasicTest to:

                int N = 128;
                int[] data = new int[N];
                data[127] = -1;
    

    Then the test fails. Should this work?

    opened by Stivo 12
  • unsortedExample (example.java) with negative numbers fails

    When I try out the unsortedExample, I get an out-of-bounds exception in getBestBFromData. This can be resolved by increasing the freqs array length to 33, but after decompressing, the result is not the same as the input. The issue can be reproduced by adding the following after the initialization of the data array in the unsortedExample method of example.java:

    data[5] = -311;
    
    opened by samuelbosch 9
  • Make direct buffer allocation optional

    This is a minor refactoring of all codecs that directly allocate Java ByteBuffers: instead of making a direct call to ByteBuffer#allocate(), the codec invokes a protected method makeBuffer(size). This allows a user to subclass the codec and override the method to customize the buffer allocation.

    In particular, this enables users to use heap buffers and/or buffer pooling. The latter is essential for reducing memory churn.

    In a JVM benchmark (not included) on some real-world data, this refactoring did not decrease performance. (Note that dropping final from some classes does not stop the JVM from inlining, e.g., at monomorphic call sites.)
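    A user could then customize allocation roughly like this (a sketch; PooledFastPFOR is a hypothetical subclass name and the exact makeBuffer signature may differ):

        // import java.nio.ByteBuffer;
        public class PooledFastPFOR extends FastPFOR {
            @Override
            protected ByteBuffer makeBuffer(int sizeInBytes) {
                // return a heap buffer (or one taken from a pool) instead of
                // allocating a fresh direct buffer on every call
                return ByteBuffer.allocate(sizeInBytes);
            }
        }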

    opened by balthz 8
  • ArrayIndexOutOfBounds with SkippableComposition

    Hi

    Great work. I have been playing with this lately, and when I tried to randomly access the compressed values (using SkippableComposition), it throws ArrayIndexOutOfBoundsException.

    This approach works fine: codec.headlessUncompress(compressed, inPoso, uncompressed.length, output, outPoso, i)

    But when I tried to change inPoso to read from a particular index, it throws an exception. E.g.:

    inPoso.set(i)
    codec.headlessUncompress(compressed, inPoso, uncompressed.length, output, outPoso, 1)

    Am I doing something wrong here? Or is it that we cannot uncompress a particular element? Any help would be appreciated.

    opened by akhld 8
  • wrong size for the recovered integer array

    I believe there is a mistake in setting the size of the recovered int[] to ChunkSize in the advancedExample() method in https://github.com/lemire/JavaFastPFOR/blob/master/example.java:

    int TotalSize = 2342351; // some arbitrary number
    int ChunkSize = 16384; // size of each chunk, choose a multiple of 128
    
    int[] data = new int[TotalSize];
    
    int[] recovered = new int[ChunkSize];
    
    

    If I do the following at the end:

            if(Arrays.equals(data, recovered)) {
                System.out.println("Elevation data is recovered in memory without loss");
            }
            else
                throw new RuntimeException("bug"); // could use assert
    

    It throws the RuntimeException, since the data array and the recovered array differ and are not of the same size.

    https://github.com/lemire/JavaFastPFOR/blob/1d650e40f6ce3052042871161488971052c9fd32/example.java#L240-L243
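    As the report suggests, sizing the recovered array to the full input length makes the final comparison pass:

        int[] recovered = new int[TotalSize]; // rather than new int[ChunkSize]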

    opened by mokun 5
  • Comments suggest to perform diff encoding

    Note that this does not use differential coding: if you are working on sorted lists, you must compute the deltas separately. // From file OptPFD.java

    Doing so increases the size of the output array from 58 to 104 integers.

        new IntegratedComposition(new IntegratedBinaryPacking(),
          new IntegratedVariableByte()),
        new JustCopy(),
        new VariableByte(),
        new IntegratedVariableByte(),
        new Composition(new BinaryPacking(), new VariableByte()),
        new Composition(new NewPFD(), new VariableByte()),
        new Composition(new NewPFDS16(), new VariableByte()),
        new Composition(new OptPFD(), new VariableByte()),
        new Composition(new OptPFDS9(), new VariableByte()),
        new Composition(new OptPFDS16(), new VariableByte()),
        new Composition(new FastPFOR128(), new VariableByte()),
        new Composition(new FastPFOR(), new VariableByte()),
        new Simple9(),
        new Simple16(),
        new Composition(new XorBinaryPacking(), new VariableByte()),
        new Composition(new DeltaZigzagBinaryPacking(),
          new DeltaZigzagVariableByte()))
    
      def test2() = {
        val N = 100
        val r = new Random(System.nanoTime())
        val keys = (1 to N).map(i => {
          //r.nextInt(Int.MaxValue)
          i * Short.MaxValue * 2 + r.nextInt(Short.MaxValue * 2)
        }).toArray
        println(keys.toList)
        println(s"size before compression ${keys.size}")
        codecs.foreach(codec => {
          print(s"${codec.toString} , ")
          val iic = new IntegratedIntCompressor()
          var start = System.nanoTime()
          val keys2 = diffEncoder.compress(keys)
          val compressed = iic.compress(keys2) // compressed array
          val encodeTime = System.nanoTime() - start
          print(s" $encodeTime , ")
          print(s" ${compressed.size} , ")
          start = System.nanoTime()
          val recov = iic.uncompress(compressed)
          //val reKeys = diffEncoder.decompress(recov)
          val decodeTime = System.nanoTime() - start
          println(s" $decodeTime , ")
          //println("reKeys == keys  "+(reKeys.toList==keys.toList))
          //println("reKeys == keys  "+(reKeys==keys))
          //println(compressed.toList)
        })
      }

      object diffEncoder {
        def compress(in: Array[Int]): Array[Int] = {
          val in1 = new Array[Int](in.length)
          var i = 0
          while (i < in.length) {
            if (i == 0)
              in1(i) = in(i)
            else
              in1(i) = in(i) - in(i - 1)
            i += 1
          }
          in1
        }

        def decompress(in: Array[Int]): Array[Int] = {
          val in1 = new Array[Int](in.length)
          var i = 0
          while (i < in.length) {
            if (i == 0)
              in1(i) = in(i)
            else
              in1(i) = in(i) + in1(i - 1)
            i += 1
          }
          in1
        }
      }
    
    opened by kk00ss 5
  • Integrated Composition not working properly with output offset != 0

    I encountered the same issue as #29, except that I was using Integrated Composition instead of just Composition. To reproduce the error, consider the following code:

    int x = 0;
    SkippableIntegratedIntegerCODEC sortedCodec = new SkippableIntegratedComposition(
            new IntegratedBinaryPacking(), new IntegratedVariableByte());
    
    int[] original = new int[] {1};
    int[] compressed = new int[10];
    IntWrapper outputOffset = new IntWrapper(x);
    sortedCodec.headlessCompress(original, new IntWrapper(0), original.length, compressed, outputOffset, new IntWrapper(0));
    System.out.println(Arrays.toString(compressed));
    

    The output is [0, 129, 0, 0, 0, 0, 0, 0, 0, 0], meaning that [1] has been compressed into [0, 129]. However, if we change the first line to x = 1, the output is still [0, 129, 0, 0, 0, 0, 0, 0, 0, 0], meaning that this time [1] has been compressed into [129] instead.

    A quick hack to fix the issue would be to switch the order of the two arguments being passed into SkippableIntegratedComposition(), but further experiments suggested that this would decrease performance.

    Would you mind looking into this issue? Thanks!

    opened by borisshou 4
  • Composition not working properly when output offset != 0

    Hi!

    Composition does not generalise to cases where the initial position in the output array is not zero. To illustrate this, the following function prints "false" for x > 0, but not for x == 0.

    public void test(int x) {
            int[] a = {2, 3, 4, 5};
            int[] b = new int[90];
            int[] c = new int[a.length];
    
            IntegerCODEC codec = new Composition(new BinaryPacking(), new VariableByte());
    
            IntWrapper aOffset = new IntWrapper(0);
            IntWrapper bOffset = new IntWrapper(x);
            codec.compress(a, aOffset, a.length, b, bOffset);
            int len = bOffset.get() - x;
    
            bOffset.set(x);
            IntWrapper cOffset = new IntWrapper(0);
            codec.uncompress(b, bOffset, len, c, cOffset);
    
            System.out.println(Arrays.equals(a, c));
    }
    

    A solution to this would simply consist in the following modification of the compress method:

        @Override
        public void compress(int[] in, IntWrapper inpos, int inlength,
                int[] out, IntWrapper outpos) {
            if (inlength == 0) {
                return;
            }
            int inposInit = inpos.get();
            int outposInit = outpos.get();
            F1.compress(in, inpos, inlength, out, outpos);
            if (outpos.get() == outposInit) {
                out[outposInit] = 0;
                outpos.increment();
            }
            inlength -= inpos.get() - inposInit;
            F2.compress(in, inpos, inlength, out, outpos);
        }
    

    Cheers

    opened by saulvargas 4
  • get byte[] from compressed int[]

    Hi, I'm new to Java and this library in general. I want to save the compressed int[] to Redis. The only way to do this is to first convert the int[] to a byte[]. Is there any way to get a byte[] from the compressed int[] short of using ByteBuffer? Thanks, Rr.
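    One way to do this without ByteBuffer is a manual big-endian pack; toBytes below is a hypothetical helper, not part of the library:

        static byte[] toBytes(int[] ints) {
            byte[] out = new byte[ints.length * 4];
            for (int i = 0; i < ints.length; i++) {
                out[4 * i]     = (byte) (ints[i] >>> 24);
                out[4 * i + 1] = (byte) (ints[i] >>> 16);
                out[4 * i + 2] = (byte) (ints[i] >>> 8);
                out[4 * i + 3] = (byte) ints[i];
            }
            return out;
        }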

    opened by rrphotosoft 4
  • Add CodeQL workflow for GitHub code scanning

    Hi lemire/JavaFastPFOR!

    This is a one-off automatically generated pull request from LGTM.com. You might have heard that we’ve integrated LGTM’s underlying CodeQL analysis engine natively into GitHub. The result is GitHub code scanning!

    With LGTM fully integrated into code scanning, we are focused on improving CodeQL within the native GitHub code scanning experience. In order to take advantage of current and future improvements to our analysis capabilities, we suggest you enable code scanning on your repository. Please take a look at our blog post for more information.

    This pull request enables code scanning by adding an auto-generated codeql.yml workflow file for GitHub Actions to your repository — take a look! We tested it before opening this pull request, so all should be working. In fact, you might already have seen some alerts appear on this pull request!

    Where needed and if possible, we’ve adjusted the configuration to the needs of your particular repository. But of course, you should feel free to tweak it further! Check this page for detailed documentation.

    Questions? Check out the FAQ below!

    FAQ


    How often will the code scanning analysis run?

    By default, code scanning will trigger a scan with the CodeQL engine on the following events:

    • On every pull request — to flag up potential security problems for you to investigate before merging a PR.
    • On every push to your default branch and other protected branches — this keeps the analysis results on your repository’s Security tab up to date.
    • Once a week at a fixed time — to make sure you benefit from the latest updated security analysis even when no code was committed or PRs were opened.

    What will this cost?

    Nothing! The CodeQL engine will run inside GitHub Actions, making use of your unlimited free compute minutes for public repositories.

    What types of problems does CodeQL find?

    The CodeQL engine that powers GitHub code scanning is the exact same engine that powers LGTM.com. The exact set of rules has been tweaked slightly, but you should see almost exactly the same types of alerts as you were used to on LGTM.com: we’ve enabled the security-and-quality query suite for you.

    How do I upgrade my CodeQL engine?

    No need! New versions of the CodeQL analysis are constantly deployed on GitHub.com; your repository will automatically benefit from the most recently released version.

    The analysis doesn’t seem to be working

    If you get an error in GitHub Actions that indicates that CodeQL wasn’t able to analyze your code, please follow the instructions here to debug the analysis.

    How do I disable LGTM.com?

    If you have LGTM’s automatic pull request analysis enabled, then you can follow these steps to disable the LGTM pull request analysis. You don’t actually need to remove your repository from LGTM.com; it will automatically be removed in the next few months as part of the deprecation of LGTM.com (more info here).

    Which source code hosting platforms does code scanning support?

    GitHub code scanning is deeply integrated within GitHub itself. If you’d like to scan source code that is hosted elsewhere, we suggest that you create a mirror of that code on GitHub.

    How do I know this PR is legitimate?

    This PR is filed by the official LGTM.com GitHub App, in line with the deprecation timeline that was announced on the official GitHub Blog. The proposed GitHub Action workflow uses the official open source GitHub CodeQL Action. If you have any other questions or concerns, please join the discussion here in the official GitHub community!

    I have another question / how do I get in touch?

    Please join the discussion here to ask further questions and send us suggestions!

    opened by lgtm-com[bot] 0
  • JDK 16 Vector API

    JDK 16 introduces the Vector API for SIMD parallelism directly from Java code: https://docs.oracle.com/en/java/javase/16/docs/api/jdk.incubator.vector/jdk/incubator/vector/package-summary.html

    My understanding is that since the original FastPFor library takes advantage of SIMD, it should be possible to do the same for this one. Is that correct?

    opened by paladin8 6
  • operations on ByteBuffer/IntBuffer

    Hi,

    I'm interested in decoding integers from an off-heap direct buffer, but I currently see no way of doing this without copying them to the heap. Is there a suggested method of doing this? Is there a fundamental (e.g., performance) reason why this is not implemented, or is it just to avoid the complexity?

    Thanks, Viktor

    opened by phraktle 2
  • Writing int[] into a file

    Assume I have 10,000 ints, which range from -9000 to +21000 in value.

    What would be the strategy for writing the int[] already compressed by JavaFastPFOR into a file for archiving?

    Should I assume I need to convert each int into 4 bytes?

    So will an int[] array with 10,000 values be saved as a byte[] array with a size of 40,000?

    Is there any other, more optimal way to save the data into a file?
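    One possible approach, sketched under the assumption that compressed is the int[] produced by JavaFastPFOR: write 4 bytes per compressed int plus a small length header, which is far fewer than the 40,000 bytes the raw data would take.

        // requires java.io.*; inside a method that throws IOException
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream("archive.bin")))) {
            out.writeInt(compressed.length); // length header for reading back
            for (int v : compressed) {
                out.writeInt(v); // 4 bytes per compressed int
            }
        }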

    opened by mokun 1
  • Restructure

    I am experimenting with this library and the Project Panama Vector API. In my fork, I have restructured the project slightly to make it easier to run benchmarks on a branch. The changes are all related to moving code around and bumping the Java language level. Would these changes be useful? If so, how could it be validated that they don't break your release process?

    opened by richardstartin 3