Geohash utilities in Java

Overview

Java utility methods for geohashing.

Status: production, available on Maven Central

Maven site reports, including javadoc, are here.

Add this to your pom:

<dependency>
    <groupId>com.github.davidmoten</groupId>
    <artifactId>geo</artifactId>
    <version>0.7.1</version>
</dependency>

Release Notes

  • 0.7 - performance improvements to GeoHash.encodeHash and others (#13), (#14), thanks @niqueco
  • 0.6.10 - compiled to java 1.6 for Android compatibility
  • 0.6.8 - get Position class from grumpy-core artifact which includes Position.longitudeDiff fix.
  • 0.6.7 - Base32.encodeBase32 now pads to max hash length which is a breaking change (#9), thanks @gnellzynga, fixed use of DEFAULT_MAX_HASHES in doco (#10).
  • 0.6.6 - fixes #8 boundary hash calculations should match geohash.org reference implementation (thanks D J Hagberg)
  • 0.6.5 - fixes issue #6 GeoHash.coverBoundingBox fails when extent is larger than that covered by a single 1 letter hash
  • 0.6 - handles neighbour calculations on borders, removed guava dependency, minor api additions
  • 0.5 - first release to Maven Central

Features

  • simple api (a short usage sketch follows this list)
  • encodes geohashes from latitude, longitude to arbitrary length (GeoHash.encodeHash)
  • decodes latitude, longitude from geohashes (GeoHash.decodeHash)
  • finds adjacent hash in any direction (GeoHash.adjacentHash), works on borders including the poles too
  • finds all 8 adjacent hashes to a hash (GeoHash.neighbours)
  • calculates hash length to enclose a bounding box (GeoHash.hashLengthToCoverBoundingBox)
  • calculates geohashes of given length to cover a bounding box. Returns coverage ratio as well (GeoHash.coverBoundingBox)
  • calculates height and width of geohashes in degrees (GeoHash.heightDegrees and GeoHash.widthDegrees)
  • encodes and decodes long values from geohashes (Base32.encodeBase32 and Base32.decodeBase32)
  • good performance (~3 million GeoHash.encodeHash calls per second on an i7, single thread)
  • no mutable types exposed by api
  • threadsafe
  • 100% unit test coverage (for what that's worth of course!)
  • Apache 2.0 licence
  • Published to Maven Central
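
For orientation, here is a minimal usage sketch of the core methods listed above (a sketch only: the Direction enum values, return types and LatLong accessors are assumptions based on the method names, not confirmed signatures):

import java.util.List;

import com.github.davidmoten.geo.Direction;
import com.github.davidmoten.geo.GeoHash;
import com.github.davidmoten.geo.LatLong;

public class GeoHashExample {
    public static void main(String[] args) {
        // encode a latitude/longitude to a geohash of length 6
        String hash = GeoHash.encodeHash(-35.3, 149.1, 6);

        // decode the centre of that hash back to a latitude/longitude
        LatLong centre = GeoHash.decodeHash(hash);

        // the hash immediately to the right, and all 8 neighbours
        String right = GeoHash.adjacentHash(hash, Direction.RIGHT);
        List<String> neighbours = GeoHash.neighbours(hash);

        System.out.println(hash + " centre=" + centre
                + " right=" + right + " neighbours=" + neighbours);
    }
}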

Bounding box searches using geohashing

What is the problem?

Databases of events at specific times occurring at specific places on the earth's surface are likely to be queried in terms of ranges of time and position. One such query is a bounding box query involving a time range and position constraint defined by a bounding lat-long box.

The challenge is to make your database run these queries quickly.

Some databases either do not support, or suffer significant performance degradation on, queries over large datasets with inequality conditions on more than one variable.

For example, a search for all ship reports within a time range and within a bounding box could be achieved with a range condition on time combined with a range condition on latitude combined with a range condition on longitude (where combined means logical AND). This type of query can perform badly on many database types, SQL and NoSQL. On Google App Engine Datastore, for instance, only one variable with inequality conditions is allowed per query; this is a sensible restriction to meet its scalability guarantees.

What is a solution?

The bounding box query with a time range can be rewritten using geohashes so that only one variable is subject to a range condition: time. The method is:

  • store geohashes of all lengths (depends on the indexing strategies available, a single full length hash may be enough) in indexed fields against each lat long position in the database. Note that storing hashes as a single long integer value may be advantageous (see Base32.decodeBase32 to convert a hash to a long).
  • calculate a set of geohashes that wholly covers the bounding box
  • perform the query using the time range and equality against the geohashes. For example:
(startTime <= t < finishTime) and (hash3='drt' or hash3='dr2')
  • filter the results of the query to include only those results within the bounding box

The last step is necessary because the set of geohashes contains the bounding box but may be larger than it.
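
A sketch of steps 2-4 in Java (the table and column names are illustrative only; Coverage.getHashes() is assumed as the accessor for the covering set of hashes, as used in the example unit test further down this page):

// 2. calculate a set of geohashes of length 3 that wholly covers the bounding box
Coverage coverage = GeoHash.coverBoundingBox(topLeftLat, topLeftLon,
        bottomRightLat, bottomRightLon, 3);

// 3. build the equality part of the where clause from the covering hashes
StringBuilder hashClause = new StringBuilder();
for (String hash : coverage.getHashes()) {
    if (hashClause.length() > 0)
        hashClause.append(" or ");
    hashClause.append("hash3='").append(hash).append("'");
}
String sql = "select * from report where startTime <= t and t < finishTime and ("
        + hashClause + ")";

// 4. after running the query, discard any rows whose latitude/longitude
//    falls outside the original bounding box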

What hash length to use?

So how long should the hashes be that we use to cover the bounding box? This will depend on your aims, which might include minimizing one or more of: cpu, url fetch time, financial cost, total data transferred from the datastore, database load, 2nd tier load, or a heap of other possible metrics.

Calling GeoHash.coverBoundingBox with just the bounding points and no additional parameters will return hashes of a length such that the number of hashes is as many as possible but less than or equal to GeoHash.DEFAULT_MAX_HASHES (12).

You can explicitly control maxHashes by calling GeoHash.coverBoundingBoxMaxHashes.
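
For example, given the bounding box corners (the parameter order of coverBoundingBoxMaxHashes is assumed to mirror coverBoundingBox, with maxHashes as the last argument):

// use the default limit of GeoHash.DEFAULT_MAX_HASHES (12) hashes
Coverage cover = GeoHash.coverBoundingBox(topLeftLat, topLeftLon,
        bottomRightLat, bottomRightLon);

// allow up to 700 hashes instead
Coverage cover700 = GeoHash.coverBoundingBoxMaxHashes(topLeftLat, topLeftLon,
        bottomRightLat, bottomRightLon, 700);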

As a quick example, take a bounding box proportioned more or less like a screen, with Schenectady NY and Hartford CT in the USA at the corners. Here are the hash counts for different hash lengths, where m is the size in square degrees of the total hashed area and a is the area of the bounding box:

length  numHashes m/a    
1           1     1694   
2           1       53     
3           4        6.6    
4          30        1.6    
5         667        1.08   
6       20227        1.02   

Only testing against your database, preferably with real-life data, will determine the optimal maxHashes value. In the benchmarks section below, a test with an H2 database found that query time was optimal when maxHashes was about 700. I doubt that this would be the case for many other databases.

A rigorous exploration of this topic would be fun to do or see. Let me know if you've done it or have a link and I'll update this page!

Hash height and width formulas

This is the relationship between a hash of length n and its height and width in degrees:

First define this function:

    parity(n) = 0 if n is even otherwise 1

Then

    width = 180 / 2^((5n + parity(n) - 2) / 2) degrees

    height = 180 / 2^((5n - parity(n)) / 2) degrees

The height and width in kilometres depend on which part of the earth the hash lies on and can be calculated using Position.getDistanceToKm. For example, at (lat,lon):

double distancePerDegreeWidth =
     new Position(lat,lon).getDistanceToKm(new Position(lat, lon+1));
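
The formulas above can also be checked against GeoHash.widthDegrees and GeoHash.heightDegrees (assumed here to take the hash length as their argument):

for (int n = 1; n <= 12; n++) {
    int parity = n % 2;
    double width = 180 / Math.pow(2, (5 * n + parity - 2) / 2.0);
    double height = 180 / Math.pow(2, (5 * n - parity) / 2.0);
    // both values should match the library's results for a hash of length n
    System.out.println(n + ": width " + width + " vs " + GeoHash.widthDegrees(n)
            + ", height " + height + " vs " + GeoHash.heightDegrees(n));
}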

Benchmarks

Inserted 10,000,000 records into an embedded H2 filesystem database, which uses B-tree indexes. The records were geographically randomly distributed across a region, then a bounding box of 1/50th the area of the region was chosen. Queries were performed as follows (time is the time to run the query and iterate the results):

hashLength  numHashes  found  from  time(s)
2           2          200k   10m   56.0
3           6          200k   1.2m  10.5
4           49         200k   303k   4.5
5           1128       200k   217k   3.6
none        none       200k   200k  31.1 (multiple range query)

I was pleasantly surprised that H2 allowed me to put over 1000 conditions in the where clause. I also tried the next higher hash length, with over 22,000 hashes, but H2 understandably threw a StackOverflowError.

To run the benchmark:

mvn clean test -Dn=10000000

Running with n=1,000,000 is much quicker to run and yields the same primary result:

multiple range query is 10X slower than geohash lookup if the hash length is chosen judiciously

Links

Comments
  • Speed optimizations in encodeHash() and hashLengthToCoverBoundingBox()

    Hi!

    I've merged bramp's speed optimization to encodeHash() and I've replaced hashLengthToCoverBoundingBox(). The latter is now 100 times faster (and creates fewer objects). What I did is just mimic the encodeHash algorithm up to the point where the results would differ, counting the number of bits, and then return that number / 5 (rounding down).

    All tests pass and I've also created a small program comparing the results between the old hashLengthToCoverBoundingBox and the new, using random values. No differences after 400,000 samples.

    Thanks!

    opened by niqueco 9
  • Base32.encodeBase32(Base32.decodeBase32(geohash)) should be the identity function

    I'm not sure whether this is an intentional design decision, but I expected that I could get back my original geohash String when I encode the decoded base32 long. This is useful for systems that wish to store a geohash as a long and later recover it as a String.

    The issue is pretty simple. Base32.encodeBase32(Base32.decodeBase32("0000")) returns "0". The encodeBase32 function needs to prepend the result with zeros to the desired length. Then the correct geohash String can be recovered.

    enhancement 
    opened by gnellzynga 5
  • Fix wrong coverage around antimeridian

    We cannot use the GeoHash.coverBoundingBox method with a bounding box around the antimeridian (180/-180) because it does not allow bottomRightLon to be lower than topLeftLon (Google Maps behavior) and it does not correctly handle longitudes outside the [-180, 180] range (Leaflet behavior).

    This pull request addresses this use case and also fixes issues in #25 and #34.

    Now the GeoHash.coverBoundingBoxLongs method allows more longitude values as input. For instance, these values are accepted (see unit tests):

    • topLeftLon = 156 and bottomRightLon = -118
    • topLeftLon = -204 and bottomRightLon = -121
    • topLeftLon = -703 and bottomRightLon = 624
    opened by guilhem-lk 4
  • Consider removing synchronized on GeoHash::heightDegrees

    Hi !

    We're using GeoHash.hashContains with 2 to 4-character geohashes and the static method heightDegrees comes up as a hot spot during our CPU sampling with VisualVM. I noticed this method is synchronized, so I was wondering if it might be a good idea to pre-calculate all MAX_HASH_LENGTH (12) values statically so the synchronization can be removed?
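
    A minimal sketch of that suggestion, using the height formula from this readme (illustrative only, not the library's current code):

    private static final double[] HEIGHT_DEGREES = new double[13];
    static {
        // pre-calculate heights for hash lengths 1..12 so reads need no locking
        for (int n = 1; n <= 12; n++) {
            int parity = n % 2;
            HEIGHT_DEGREES[n] = 180 / Math.pow(2, (5 * n - parity) / 2.0);
        }
    }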

    Is this something you may consider changing ?

    Thanks Regards

    opened by Crystark 4
  • Add overload to GeoHash.encode to include both length and max parameters

    As requested by @abrin.

    David, as I use it a bit it might be nice to expose both "length" and "max" to the encode functions. Right now one has to choose between either a number of hashes or a max hash length. But if you're selecting a larger region it likely becomes important to have access to both.

    thanks,

    adam

    opened by davidmoten 4
  • GeoHash algorithm not producing expected results

    Hi, I've been having some issues generating geohashes of points and then of bounding boxes that I would expect to share a prefix. E.g. this fairly arbitrary point in New Mexico (35.37322998046875, -108.7591552734375) should be contained in a rough bounding box of the United States (13.923403897723347, -126.91406249999999 x 55.37911044801047, -74.1796875). Am I doing something wrong?

    thanks

    Unit test below:

    package org.digitalantiquity.skope;
    
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;
    
    import org.apache.log4j.Logger;
    import org.digitalantiquity.skope.service.LuceneIndexingService;
    import org.junit.Test;
    
    import com.github.davidmoten.geo.Coverage;
    import com.github.davidmoten.geo.GeoHash;
    import com.github.davidmoten.geo.LatLong;
    
    public class ShapefileParserTest {
    
        private final Logger logger = Logger.getLogger(getClass());
    
    
        @Test
        public void testHash() {
            double x1 = -126.91406249999999;
            double y1 = 13.923403897723347;
            double x2 = -74.1796875;
            double y2 = 55.37911044801047;
    
            logger.debug(String.format("start (%s,%s) x(%s,%s)", x1, y1, x2, y2));
            String point = "9w69jps00000";
            LatLong latLong = GeoHash.decodeHash(point);
            logger.debug(latLong);
            String hash2 = GeoHash.encodeHash(35.37322998046875, -108.7591552734375);
            assertEquals(hash2, point);
            logger.debug(hash2);
            Coverage coverage = GeoHash.coverBoundingBox(x1,y1,x2,y2,4);
            boolean seen = false;
            logger.debug(coverage);
            for (String hash : coverage.getHashes()) {
                if (hash.equals(point) || point.startsWith(hash)) {
                    logger.debug(hash + " -> " + GeoHash.decodeHash(hash));
                    seen = true;
                }
            }
            assertTrue("should have seen hash",seen);
        }
    }
    
    opened by abrin 4
  • Faster coverBoundingBox() by using longs as geohashes internally

    Hi, I have another patch =)

    By internally representing hashes as longs I could make coverBoundingBox() faster. On my laptop I sped that function up from 5.8 µs to 2.6 µs (123% faster). (In an Android app times are significantly higher, of course.)

    The long representation I use for geohashes is directly tied to the algorithm and it's incompatible with the representation used in Base32.

    The representation is: starting from the most significant bits, each 5-bit group holds one geohash character and the remaining 4 least significant bits encode the hash length. That allows for hashes up to 12 characters (because 12 * 5 + 4 = 64). I think this representation is also well suited to databases, as the ordering is the same as for the string values.
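
    As a hypothetical illustration of that layout (not the library's actual internals): each 5-bit character index is packed from the most significant end, and the hash length goes in the 4 least significant bits.

    static long pack(int[] charIndexes) {
        long packed = 0;
        for (int i = 0; i < charIndexes.length; i++) {
            // 5-bit groups starting at the most significant bits
            packed |= ((long) (charIndexes[i] & 0x1F)) << (64 - 5 * (i + 1));
        }
        // hash length (at most 12, so it fits in 4 bits) in the least significant bits
        return packed | charIndexes.length;
    }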

    I had to add a CoverageLongs class, but it's not public (it could be at some point, I think).

    Thanks!

    opened by niqueco 4
  • Unexpected coverage with bounding box of the world

    I might be using this incorrectly, but when I do

    double topLeftLat = 90d;
    double topLeftLon = -179d;
    double bottomRightLat = -90d;
    double bottomRightLon = 180d;
    Coverage c = GeoHash.coverBoundingBox(topLeftLat, topLeftLon, bottomRightLat, bottomRightLon, 1);
    

    I would have expected c.getHashes() to contain the 32 single-character Geohashes (i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, b, c, d, e, f, g, h, j, k, m, n, p, q, r, s, t, u, v, w, x, y, z).

    Instead, it contains only 0, 2, 8, b.

    opened by k-mack 3
  • Attempt to get boundary geohashes to line up with PostGIS ST_GeoHash and geohash.org

    At lat/lon 0,0 the output of geohash should be s00000000000 to match geohash.org and PostGIS ST_GeoHash function.

    => select ST_GeoHash(ST_SetSRID(ST_MakePoint(0,0),4326),12);
      st_geohash  
     s00000000000
    

    A similar issue was fixed in PostGIS to match geohash.org; reference: http://trac.osgeo.org/postgis/ticket/2201

    Also: Add asserts for hashes around boundary/pole conditions.

    opened by dhagberg 3
  • Bump spotbugs-maven-plugin from 4.6.0.0 to 4.7.0.0

    Bumps spotbugs-maven-plugin from 4.6.0.0 to 4.7.0.0.

    Release notes

    Sourced from spotbugs-maven-plugin's releases.

    Spotbugs-maven-plugin 4.7.0.0

    • Support spotbugs 4.7.0
    • Fix #68; note it still requires use of a separate plugin, see https://github.com/spotbugs/spotbugs-maven-plugin#eclipse-m2e-integration
    • Fix #114 by introducing verify mojo to allow split of analysis and verification into lifecycle phases of one's choosing. One use case for that is running multiple code analyzers at once and only failing the build at a later stage, so that all of them have a chance to run. Verify then in this case just uses an existing file.
    • Updated readme with various information about groovy, the security manager, and m2e considerations specific to this application that are often asked about.
    Commits
    • f08f599 [maven-release-plugin] prepare release spotbugs-maven-plugin-4.7.0.0
    • a45eac3 [pom] Skip pdf plugin as it doesn't work well
    • 8e86861 Merge pull request #434 from hazendaz/spotbugs
    • bc6bad2 Merge pull request #433 from spotbugs/dependabot/maven/org.codehaus.plexus-pl...
    • 02a5e75 [pom] Bump jxr plugin to 3.2.0
    • 0036acf [pom] Bump findsecbugs IT to 1.12.0
    • 623220e [pom] Bump IT compiler to 3.10.1
    • c21d15a [actions] Use jdk 11 with site for now
    • c5bff47 [pom] Update comment in pom
    • e9bf09c Bump plexus-utils from 3.4.1 to 3.4.2
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • Bump spotbugs-maven-plugin from 4.2.3 to 4.6.0.0

    Bumps spotbugs-maven-plugin from 4.2.3 to 4.6.0.0.

    Release notes

    Sourced from spotbugs-maven-plugin's releases.

    Spotbugs-maven-plugin 4.6.0.0

    • Spotbugs 4.6.0 support
    • Groovy 4.0.1 based

    note on groovy: If using groovy with same group id (already existing condition), an error may occur if not on same version. To alleviate that, make sure groovy artifacts are defined in dependency management in order to force the loaded version correctly on your usage.

    note on 4.6.0.1/4.6.0.2: no change, not released. Issue with site distribution via maven release plugin only that is being tested, use 4.6.0.0 only.

    Spotbugs-maven-plugin 4.5.3.0

    • Support spotbugs maven plugin 4.5.3.0
    • Make maven scoped dependencies provided scope

    Spotbugs-maven-plugin 4.5.2.0

    • Support spotbugs 4.5.2
    • Fix deprecations from spotbugs 4.5.0

    Spotbugs-maven-plugin 4.5.0.0

    support for spotbugs 4.5.0

    Spotbugs-maven-plugin 4.4.2.2

    • Use new base-parent pom with removal of undocumented maven url attributes that cause issues for users of older jfrog artifactory installations.

    Spotbugs-maven-plugin 4.4.2.1 Release

    • Reworked the version string to account for any patches we need to make to the plugin that would otherwise cause a divergence from spotbugs or require us to wait. This is similar to how other plugins such as lombok approach this. The first 3 positions are reserved for alignment with spotbugs; the last position is for our patch revision level. Normally this would be '0' but given we released 4.4.2 already, it made sense to denote '1' so that it was clear there was a difference.
    • This patch release addresses issues with resolution of the maven dependencies that resulted in a few regression libraries that had vulnerabilities.
    • This patch further changed lowest maven from 3.2.5 to 3.3.9 but reality is that even 3.3.9 likely doesn't work. Since all maven before 3.8.1 are vulnerable, most should be there. If not, let us know. Future releases will raise that revision number up.

    Spotbugs-maven-plugin 4.4.2 Release

    Added support for spotbugs 4.4.2. Now running github actions on jdk 17 and 18-ea to show this works there. Now running against maven 3.8.3. Updated a number of plugins and dependencies.

    Spotbugs-maven-plugin 4.4.1 Release

    • Add support for Sarif
    • Support spotbugs 4.4.1
    • Library Updates
    Commits
    • 1757c7f [maven-release-plugin] prepare release spotbugs-maven-plugin-4.6.0.0
    • 7e022d7 [pom] Bump remainder to spotbugs 4.6.0
    • aa8a2b1 Merge pull request #413 from spotbugs/dependabot/maven/org.codehaus.mojo-vers...
    • c51b51c Bump versions-maven-plugin from 2.9.0 to 2.10.0
    • fd7e020 Merge pull request #411 from spotbugs/dependabot/maven/mavenVersion-3.8.5
    • 4b591e2 Bump mavenVersion from 3.8.4 to 3.8.5
    • 3276bfa Merge pull request #412 from spotbugs/dependabot/maven/mavenCoreVersion-3.8.5
    • 047836c Bump mavenCoreVersion from 3.8.4 to 3.8.5
    • 4fa6caa Merge pull request #409 from spotbugs/dependabot/maven/com.github.spotbugs-sp...
    • 3d45f8f Merge pull request #410 from spotbugs/dependabot/maven/groovyVersion-4.0.1
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • Feature Request: sub-cells by direction.

    First of all, I'd like to say that it's really a pleasure to use this library.

    I have a use case for which I need to identify "border cells" below a certain distance, e.g. all cells with a resolution of about 2.4x2.4 m^2 (geohash length = 9) on the border between cells "b" and "c".

    What I would require in this case is being able to extract all cells of geohash length n+1 (e.g. length 2 starting from cells of length 1) by direction, so as to have the rightmost column of sub-cells in "b" of geohash length, say, 9, and the leftmost column of sub-cells of "c" of the same geohash length.

    enhancement 
    opened by ShamblingCrane 0
  • Dependency Issues

    Looks like there is a dependency reference on java.awt in the library. Although this isn't an Android library, it might be handy to clean up this kind of reference.

    Package not included in Android:
    ../../../../../.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-math3/3.2/ec2544ab27e110d2d431bdad7d538ed509b21e62/commons-math3-3.2.jar: Invalid package reference in library; not included in Android: java.awt.geom. Referenced from org.apache.commons.math3.geometry.euclidean.threed.PolyhedronsSet.RotationTransform.
    ../../../../../.gradle/caches/modules-2/files-2.1/com.github.davidmoten/grumpy-core/0.2.2/48b4cff02a62a3a9ea876dd2486225aefd225706/grumpy-core-0.2.2.jar: Invalid package reference in library; not included in Android: java.awt. Referenced from com.github.davidmoten.grumpy.core.Position.

    enhancement 
    opened by bpappin 5
  • How can I join geohashes?

    Hi, and great job.

    I use geohashes for land polygons and, where geohashes can be merged, I want to merge them into a bigger one. Is there a way?

    Like Georaptor does (https://github.com/ashwin711/georaptor).

    question 
    opened by Charmatzis 1
  • Geohash.neighbors wrong results

    ...for hash "u".

    result = {ArrayList@3156} size = 8 0 = "g" 1 = "v" 2 = "b" 3 = "s" 4 = "z" 5 = "e" 6 = "c" 7 = "t"

    g u v e s t seem to be OK. I could imagine, that u meets b, c and f over the pole. But what about "z"?

    Regards

    question 0 - Backlog 
    opened by neilyoung 2
  • Get a bounding box by hash length

    It would be great to have a BoundingBox object with: area, southwest.latitude, southwest.longitude, northeast.latitude, northeast.longitude, etc. Keep up the good work.

    enhancement 
    opened by firetrap 0
Releases(0.8.0)
  • 0.8.0(Sep 5, 2021)

    Breaking changes

    • requires Java 1.8+ runtime

    Enhancements

    • Fix wrong coverage around antimeridian (#49), thanks @guilhem-lk!
    • upgrade guava (test dependency), adjust plugins, fix spotbugs and pmd violations
    • remove useless parentheses, remove unused private method
    • update plugins, reporting and fix minor spotbugs violations
    • improve CoverageLong.toString
    • Bump maven-compiler-plugin from 3.5.1 to 3.8.1
    • Bump junit from 4.13.1 to 4.13.2
    • Bump exec-maven-plugin from 1.3.2 to 3.0.0
    • Bump maven-javadoc-plugin from 2.10.3 to 3.3.0
    • Bump grumpy-core from 0.2.2 to 0.4.0
    • Bump commons-io from 2.4 to 2.11.0
    • Bump jmh.version from 1.11.1 to 1.33
    • Bump maven-site-plugin from 3.3 to 3.9.1
    • Bump commons-io from 2.4 to 2.7 in /geo-mem
    • add github actions CI and dependabot
    • add coverage badge
  • 0.7.6(Aug 3, 2017)

  • 0.7.5(Apr 10, 2017)

    • split com.github.davidmoten.geo.mem package into separate dependency (geo-mem) so core artifact (geo) no longer depends on guava. See discussion in #26.
  • 0.7.4(Nov 4, 2015)

  • 0.7.1(Mar 6, 2015)

  • 0.7(Jan 16, 2015)

    • #13 performance improvements to GeoHash.encodeHash and GeoHash.hashLengthToCoverBoundingBox, thanks @niqueco
    • #14 performance improvements GeoHash.coverBoundingBox, thanks @niqueco
    • upgrade cobertura plugin to 2.6 to fix mvn site hang
    • remove unnecessary null check in GeoHash.coverBoundingBox to regain 100% code coverage
  • 0.6.10(Dec 10, 2014)

Owner: Dave Moten