Fast integer compression in C using the StreamVByte codec

Overview

StreamVByte is an integer compression technique that applies SIMD instructions (vectorization) to Google's Group Varint approach. The net result is a codec that is faster than other byte-oriented compression techniques.

The approach is patent-free, and the code is available under the Apache License.

It includes fast differential coding.

It assumes a recent Intel processor (e.g., Haswell or better) or an ARM processor with NEON instructions (which is almost all of them).

The code should build using most standard-compliant C99 compilers. The provided makefile expects a Linux-like system.

Users

This library is used by

Usage

Usage with Makefile:

  make
  ./unit

Usage with CMake:

The cmake build system also offers a libstreamvbyte_static.a in addition to libstreamvbyte.so.

-DCMAKE_INSTALL_PREFIX:PATH=/path/to/install is optional; it defaults to /usr/local/{include,lib}.

By default, the project builds with -march=native (except on MSVC), use -DSTREAMVBYTE_DISABLE_NATIVE=ON to disable.

mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DCMAKE_INSTALL_PREFIX:PATH=/path/to/install
make install

# run the tests like:
ctest -V

See example.c for an example.

Short code sample:

// suppose that datain is an array of uint32_t integers
size_t compsize = streamvbyte_encode(datain, N, compressedbuffer); // encoding
// here the result is stored in compressedbuffer using compsize bytes
streamvbyte_decode(compressedbuffer, recovdata, N); // decoding (fast)

If the values are sorted, then it might be preferable to use differential coding:

// suppose that datain is an array of uint32_t integers
size_t compsize = streamvbyte_delta_encode(datain, N, compressedbuffer, 0); // encoding
// here the result is stored in compressedbuffer using compsize bytes
streamvbyte_delta_decode(compressedbuffer, recovdata, N, 0); // decoding (fast)

You have to know how many integers were coded when you decompress. You can store this information along with the compressed stream.

Signed integers

We do not directly support signed integers, but you can use fast functions to convert signed integers to unsigned integers.

#include "streamvbyte_zigzag.h"

zigzag_encode(mysignedints, myunsignedints, number); // mysignedints => myunsignedints

zigzag_decode(myunsignedints, mysignedints, number); // myunsignedints => mysignedints

Installation

You can install the library (as a dynamic library) on your machine if you have root access:

  sudo make install

To uninstall, simply type:

  sudo make uninstall

It is recommended that you try make dyntest before proceeding.

Benchmarking

You can try to benchmark the speed in this manner:

  make perf
  ./perf

Make sure to run make test before, as a sanity test.

Technical posts

Alternative encoding

By default, Stream VByte uses 1, 2, 3, or 4 bytes per integer. If you expect many of your integers to be zero, you might try the streamvbyte_encode_0124 and streamvbyte_decode_0124 functions, which use 0, 1, 2, or 4 bytes per integer.

Stream VByte in other languages

Format Specification

We specify the format as follows.

We do not store how many integers (count) are compressed in the compressed data per se. If you want to store the data stream (e.g., to disk), you need to add this information. It is intentionally left out because, in applications, it is often the case that there are better ways to store this count.

There are two streams:

  • The data starts with an array of "control bytes". There are (count + 3) / 4 of them.
  • Following the array of control bytes, there are data bytes.

We can interpret the control bytes as a sequence of 2-bit words. The first 2-bit word is made of the least significant 2 bits in the first byte, and so forth. There are four 2-bit words written in each byte.

Starting from the first 2-bit word, we have a corresponding sequence in the data bytes, written in order from the beginning:

  • When the 2-bit word is 00, there is a single data byte.
  • When the 2-bit word is 01, there are two data bytes.
  • When the 2-bit word is 10, there are three data bytes.
  • When the 2-bit word is 11, there are four data bytes.

The data bytes are stored using a little-endian encoding.

Consider the following example:

control bytes: [0x40 0x55 ... ]
data bytes: [0x00 0x64 0xc8 0x2c 0x01 0x90  0x01 0xf4 0x01 0x58 0x02 0xbc 0x02 ...]

The first control byte is 0x40, i.e., the four 2-bit words 00 00 00 01 (least significant word first). The second control byte is 0x55, i.e., the four 2-bit words 01 01 01 01. Thus the first three values are given by the first three bytes: 0x00, 0x64, 0xc8 (or 0, 100, 200 in base 10). The next five values are stored using two bytes each: 0x2c 0x01, 0x90 0x01, 0xf4 0x01, 0x58 0x02, 0xbc 0x02. As little-endian integers, these are to be interpreted as 300, 400, 500, 600, 700.

Thus, to recap, the sequence of integers (0,100,200,300,400,500,600,700) gets encoded as the 15 bytes 0x40 0x55 0x00 0x64 0xc8 0x2c 0x01 0x90 0x01 0xf4 0x01 0x58 0x02 0xbc 0x02.

If the count is not divisible by four, then the final group is partial: the unused 2-bit words are set to zero and have no corresponding data bytes.

Reference

See also

Comments
  • Endianness

    Endianness

    From Fig 3 in https://arxiv.org/pdf/1709.08990.pdf, it looks like the data layout is intended to be big-endian. However, in the test data from https://bitbucket.org/marshallpierce/stream-vbyte-rust/commits/ad95ed76e271a10c0c0bb57e23800a4e23d606e9 encoding 0, 100, 200, 300, ..., we have the following hex (format from hexdump -C):

    00000000  40 55 55 55 55 55 55 55  55 55 55 55 55 55 55 55  |@UUUUUUUUUUUUUUU|
    00000010  55 55 55 55 55 55 55 55  55 55 55 55 55 55 55 55  |UUUUUUUUUUUUUUUU|
    

    Since the first four numbers are 0, 100, 200, 300 taking 1, 1, 1, 2 bytes respectively, based on the figure's diagram of control bits to encoded ints I would expect the first control byte to be 0b00000001 = 0x01, not 0b01000000 = 0x40.

    There are 1250 = 0x4E2 control bytes, so looking at where the encoded numbers start, we see:

    000004e0  aa aa 00 64 c8 2c 01 90  01 f4 01 58 02 bc 02 20  |...d.,.....X... |
    

    0 = 0x00, 100 = 0x64, 200 = 0xC8 are single bytes of course, but 300 = 0x012C in big endian, and the sample data has 0x2C01.

    Of course, little-endian is just as valid a choice as big-endian! Am I misinterpreting the paper? Should I be letting the user choose which endianness to expect?

    opened by marshallpierce 12
  • Vector encoder

    Vector encoder

    SSE vector-based encoder. Encodes a quad of uint's at a time.

    This is for non-delta only. The vector encoder is added to streamvbyte.c, and the shuffle tables are moved to an include file due to their bulk.

    opened by KWillets 11
  • Compression uint32_t stream with lots of zeroes

    Compression uint32_t stream with lots of zeroes

    I am trying to use streamvbyte in an in-house archiving software which we'll hopefully be able to publish as open source at some point. Your library is a great match for my use case, with its excellent performance and compression that is good enough.

    There's a catch though.

    My stream of uint32_t's looks like the following: 234, 566, 0, 0, 333, 0, 0, 0, 1578987, 0, 234, 444, <a few million uint32_t's more>. Notice that there are lots of zeroes, about 30-40% of the stream. Zero distribution is highly unpredictable, and I know that zero run length is probably gonna be about 2-3 zeroes max.

    But streamvbyte can only use 1,2,3,4 bytes per integer, depending on the value. I calculated that for my use case it's much more reasonable to have something like 0,1,2,4, i.e.:

    1. I don't want to include zeroes in the stream.
    2. I don't really need 3 bytes values.

    This would mean that I would only have to keep about 2 bits per zero value.

    I am going through your related papers - which are very readable! - and the code and it seems to me that it should be possible to just patch streamvbyte to match my needs.

    So here's a question:

    1. Is there anything that I don't understand and that might become a problem here?
    2. I'll have 3-5 full time days to solve the issue. Is there a way I can solve the issue and contribute back some code?

    Thank you!

    opened by vkazanov 8
  • Unroll the encoder?

    Unroll the encoder?

    In the current version, empty space fills 75% of the xmm register during some operations. Unrolling 4x and then packing as soon as possible seems to get a ~9% improvement.

    https://gist.github.com/aqrit/8fb615a05586d023e07cbd997cfcb6f9

    food for thought.

    help wanted performance 
    opened by aqrit 7
  • Shared object version

    Shared object version

    The cmake build is not producing libstreamvbyte.so.0.0.1, which I see is built by the minimal Makefile. Do you see the same on your end, or is there maybe something amiss in my cmake configuration?

    opened by outpaddling 6
  • Add CMakeLists.txt to streamvbyte

    Add CMakeLists.txt to streamvbyte

    Currently, there is no option other than installing the library and headers in global scope. This doesn't play nice with other build systems.

    With this file a user can simply do:

    mkdir build
    cd build
    cmake .. -DCMAKE_BUILD_TYPE=Release \
             -DCMAKE_INSTALL_PREFIX:PATH=/path/to/install
    make -j8 install
    
    

    In addition, I added tests/unit to the unit test framework of cmake so that you can do ctest -V and you would get:

    Test project /home/agallego/workspace/streamvbyte/build
    Constructing a list of tests
    Done constructing a list of tests
    Updating test list for fixtures
    Added 0 tests to meet fixture requirements
    Checking test dependency graph...
    Checking test dependency graph end
    test 1
        Start 1: unit
    
    1: Test command: /home/agallego/workspace/streamvbyte/build/unit
    1: Test timeout computed to be: 9.99988e+06
    1: Code looks good.
    1: And you have a little endian architecture.
    1: Warning: you tested non-vectorized code.
    1/1 Test #1: unit .............................   Passed    0.02 sec
    
    100% tests passed, 0 tests failed out of 1
    
    Total Test time (real) =   0.02 sec
    
    opened by emaxerrno 6
  • decode without length table

    decode without length table

    This works on clang-4.0, but the pextrb intrinsic throws errors on gcc:

    /usr/lib/gcc/x86_64-linux-gnu/6/include/smmintrin.h:443:27: error: selector must be an integer constant in the range 0..15
      return (unsigned char) __builtin_ia32_vec_ext_v16qi ((__v16qi)__X, __N);

    I believe this is simply wrong, as the arg is a const int, but not a constant.

    Performance on clang is exactly the same:

    kendall@skylake:~/streamvbyte$ ./perf
    time = 0.036000 1388888960.000000 uints/sec compsize=1281422 compsize2 = 1281422
    Compressed 500000 integers down to 1281422 bytes.

    opened by KWillets 6
  • Fixes and verifies issue 42

    Fixes and verifies issue 42

    When decoding, our functions may read up to 16 extra bytes from the input (beyond the actual compressed data). Thus, for safety, users of this library should ensure that the allocated buffer extends 16 bytes beyond the compressed data.

    opened by lemire 5
  • Adds optional function to compute required memory

    Adds optional function to compute required memory

    For #32 - please read for context. Sometimes it's better to trade off encoding runtime for reduced peak memory allocation.

    Fixes https://github.com/lemire/streamvbyte/issues/32

    opened by daniel-j-h 5
  • Illegal instruction (core dumped)

    Illegal instruction (core dumped)

    OS: Ubuntu 20.04.5 LTS
    CPU: Intel Xeon X5660

    Step to reproduce:

    git clone [email protected]:lemire/streamvbyte.git
    mkdir build && cd build
    cmake ..
    cmake --build .
    

    Output cmake ..

    -- The C compiler identification is GNU 9.4.0
    -- The CXX compiler identification is GNU 9.4.0
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    -- No build type selected
    -- Default to Release
    -- CMAKE_SYSTEM_PROCESSOR: x86_64
    -- CMAKE_BUILD_TYPE: Release
    -- CMAKE_C_COMPILER: /usr/bin/cc
    -- CMAKE_C_FLAGS: 
    -- CMAKE_C_FLAGS_DEBUG: -g
    -- CMAKE_C_FLAGS_RELEASE: -O3 -DNDEBUG
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/kataev/development/streamvbyte/build
    

    Output ctest -V

    UpdateCTestConfiguration  from :/home/kataev/development/streamvbyte/build/DartConfiguration.tcl
    UpdateCTestConfiguration  from :/home/kataev/development/streamvbyte/build/DartConfiguration.tcl
    Test project /home/kataev/development/streamvbyte/build
    Constructing a list of tests
    Done constructing a list of tests
    Updating test list for fixtures
    Added 0 tests to meet fixture requirements
    Checking test dependency graph...
    Checking test dependency graph end
    test 1
        Start 1: unit
    
    1: Test command: /home/kataev/development/streamvbyte/build/unit
    1: Test timeout computed to be: 10000000
    1/1 Test #1: unit .............................***Exception: Illegal  0.28 sec
    
    0% tests passed, 1 tests failed out of 1
    
    Total Test time (real) =   0.29 sec
    
    The following tests FAILED:
    	  1 - unit (ILLEGAL)
    Errors while running CTest
    

    Output ./perf

    Illegal instruction (core dumped)
    
    opened by victor1234 3
  • CMake: allow parent project to disable unit tests for streamvbyte

    CMake: allow parent project to disable unit tests for streamvbyte

    While integrating into another cmake project via add_subdirectory() the target check might already be defined by a parent directory.

    In addition, when you run a parent project's unit tests, you don't want the child project's unit tests to run.

    For example, this won't build the unit tests:

    cmake -DSTREAMVBYTE_ENABLE_TESTS=OFF .. 
    
    
    opened by emaxerrno 3
  • Add CodeQL workflow for GitHub code scanning

    Add CodeQL workflow for GitHub code scanning

    Hi lemire/streamvbyte!

    This is a one-off automatically generated pull request from LGTM.com :robot:. You might have heard that we’ve integrated LGTM’s underlying CodeQL analysis engine natively into GitHub. The result is GitHub code scanning!

    With LGTM fully integrated into code scanning, we are focused on improving CodeQL within the native GitHub code scanning experience. In order to take advantage of current and future improvements to our analysis capabilities, we suggest you enable code scanning on your repository. Please take a look at our blog post for more information.

    This pull request enables code scanning by adding an auto-generated codeql.yml workflow file for GitHub Actions to your repository — take a look! We tested it before opening this pull request, so all should be working :heavy_check_mark:. In fact, you might already have seen some alerts appear on this pull request!

    Where needed and if possible, we’ve adjusted the configuration to the needs of your particular repository. But of course, you should feel free to tweak it further! Check this page for detailed documentation.

    Questions? Check out the FAQ below!

    FAQ

    Click here to expand the FAQ section

    How often will the code scanning analysis run?

    By default, code scanning will trigger a scan with the CodeQL engine on the following events:

    • On every pull request — to flag up potential security problems for you to investigate before merging a PR.
    • On every push to your default branch and other protected branches — this keeps the analysis results on your repository’s Security tab up to date.
    • Once a week at a fixed time — to make sure you benefit from the latest updated security analysis even when no code was committed or PRs were opened.

    What will this cost?

    Nothing! The CodeQL engine will run inside GitHub Actions, making use of your unlimited free compute minutes for public repositories.

    What types of problems does CodeQL find?

    The CodeQL engine that powers GitHub code scanning is the exact same engine that powers LGTM.com. The exact set of rules has been tweaked slightly, but you should see almost exactly the same types of alerts as you were used to on LGTM.com: we’ve enabled the security-and-quality query suite for you.

    How do I upgrade my CodeQL engine?

    No need! New versions of the CodeQL analysis are constantly deployed on GitHub.com; your repository will automatically benefit from the most recently released version.

    The analysis doesn’t seem to be working

    If you get an error in GitHub Actions that indicates that CodeQL wasn’t able to analyze your code, please follow the instructions here to debug the analysis.

    How do I disable LGTM.com?

    If you have LGTM’s automatic pull request analysis enabled, then you can follow these steps to disable the LGTM pull request analysis. You don’t actually need to remove your repository from LGTM.com; it will automatically be removed in the next few months as part of the deprecation of LGTM.com (more info here).

    Which source code hosting platforms does code scanning support?

    GitHub code scanning is deeply integrated within GitHub itself. If you’d like to scan source code that is hosted elsewhere, we suggest that you create a mirror of that code on GitHub.

    How do I know this PR is legitimate?

    This PR is filed by the official LGTM.com GitHub App, in line with the deprecation timeline that was announced on the official GitHub Blog. The proposed GitHub Action workflow uses the official open source GitHub CodeQL Action. If you have any other questions or concerns, please join the discussion here in the official GitHub community!

    I have another question / how do I get in touch?

    Please join the discussion here to ask further questions and send us suggestions!

    opened by lgtm-com[bot] 0
  • Support for Signed Types Integrated in the CODECs

    Support for Signed Types Integrated in the CODECs

    I have an interest in using streamvbyte with signed types (specifically int16) and have created a Python wrapper which supports this; however, the overhead is quite large, as you would imagine: https://github.com/iiSeymour/pystreamvbyte/issues/1.

    Native support for an efficient zigzag encoding/decoding for int16 and int32 would be great. I imagine this is something that would vectorise well?

    opened by iiSeymour 39
  • Better integrate the 0,1,2,4 bytes mode

    Better integrate the 0,1,2,4 bytes mode

    Following this PR https://github.com/lemire/streamvbyte/pull/26 we now have code that can use a 0,1,2,4-byte encoding. However, it is basically achieved through pure code duplication. Worse: it does not benefit from @aqrit's latest improvements.

    Obviously, we could do better.

    enhancement help wanted 
    opened by lemire 1
  • Port differential coded version to ARM NEON

    Port differential coded version to ARM NEON

    The generic codec supports both x64 and ARM NEON, however the differential-encoded version is x64 only.

    It seems like it would be easy to port them over. The Delta function in ARM is almost identical:

    uint32x4_t Delta(uint32x4_t curr, uint32x4_t prev) {
       return vsubq_u32(curr, vextq_u32 (prev,curr,3));
    }
    

    And so is the prefix sum which is currently mixed with the store in _write_avx_d1 (for historical reasons I suppose)...

    uint32x4_t PrefixSum(uint32x4_t curr, uint32x4_t prev) {
       uint32x4_t zero = {0, 0, 0, 0};
       uint32x4_t add = vextq_u32 (zero, curr, 3);
       uint8x16_t BroadcastLast = {12,13,14,15,12,13,14,15,12,13,14,15,12,13,14,15};
       prev = vreinterpretq_u32_u8(vqtbl1q_u8(vreinterpretq_u8_u32(prev),BroadcastLast));
       curr = vaddq_u32(curr,add);
       add = vextq_u32 (zero, curr, 2);
       curr = vaddq_u32(curr,prev);
       curr = vaddq_u32(curr,add);
       return curr;
    }
    

    It could be that my implementations are suboptimal, but I think that they are correct and given these functions it should be easy to create a differentially coded codec.

    enhancement help wanted 
    opened by lemire 0
  • Reduce the size of the lookup tables

    Reduce the size of the lookup tables

    The current lookup tables are quite large. Finding a way to substantially reduce their memory usage without adversely affecting performance would be a worthy goal.

    help wanted 
    opened by lemire 17
  • Compute quickly the byte lengths without look-ups

    Compute quickly the byte lengths without look-ups

    Some look-ups could be efficiently replaced by fast instructions such as a pdep followed by a multiplication and a shift. It is unlikely to be generally faster than a look-up, but it might be worth exploring.

    opened by lemire 8
Releases(v0.5.1)
  • v0.5.1(Aug 6, 2022)

    What's Changed

    • Fixes and verifies issue 42 by @lemire in https://github.com/lemire/streamvbyte/pull/44

    Full Changelog: https://github.com/lemire/streamvbyte/compare/v0.5.0...v0.5.1

    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Aug 6, 2022)

    What's Changed

    • Turn on position independent code by @iiSeymour in https://github.com/lemire/streamvbyte/pull/30
    • Msvc 2017 build by @jorj1988 in https://github.com/lemire/streamvbyte/pull/31
    • Adds optional function to compute required memory by @daniel-j-h in https://github.com/lemire/streamvbyte/pull/33
    • Adds function to compute required memory fr the 0124 scheme, see #32 by @daniel-j-h in https://github.com/lemire/streamvbyte/pull/34
    • Adding undef. tests. by @lemire in https://github.com/lemire/streamvbyte/pull/36
    • runtime dispatch by @lemire in https://github.com/lemire/streamvbyte/pull/37
    • complete runtime dispatch by @lemire in https://github.com/lemire/streamvbyte/pull/38
    • Minor tweak for visual studio by @lemire in https://github.com/lemire/streamvbyte/pull/39
    • Improves the runtime dispatching. by @lemire in https://github.com/lemire/streamvbyte/pull/43

    New Contributors

    • @jorj1988 made their first contribution in https://github.com/lemire/streamvbyte/pull/31
    • @daniel-j-h made their first contribution in https://github.com/lemire/streamvbyte/pull/33
    • @lemire made their first contribution in https://github.com/lemire/streamvbyte/pull/36

    Full Changelog: https://github.com/lemire/streamvbyte/compare/v0.4.1...v0.5.0

    Source code(tar.gz)
    Source code(zip)
Owner
Daniel Lemire
Daniel Lemire is a computer science professor. His research is focused on software performance and indexing.