Elasticsearch Java REST Client

Overview

Jest is a Java HTTP REST client for Elasticsearch.

Elasticsearch is an open-source (Apache 2), distributed, RESTful search engine built on top of Apache Lucene.

Elasticsearch already ships a Java API, which Elasticsearch itself uses internally, but Jest fills a gap: it is the missing client for the Elasticsearch HTTP REST interface.

Read a great introduction to Elasticsearch and Jest at IBM developerWorks.

Documentation

For the core Jest Java library, which you can use as a Maven dependency, please refer to the README of the jest module.

For the Android port, please refer to the README of the jest-droid module.
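As a quick orientation before diving into the module READMEs, a typical Jest bootstrap looks roughly like the following. This is a sketch based on Jest's documented JestClientFactory/HttpClientConfig API; builder and shutdown method names have shifted between Jest versions, so treat the jest module README as authoritative.

```java
import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.config.HttpClientConfig;

public class JestBootstrap {
    public static void main(String[] args) throws Exception {
        // Build a client pointed at a single Elasticsearch HTTP endpoint.
        JestClientFactory factory = new JestClientFactory();
        factory.setHttpClientConfig(
                new HttpClientConfig.Builder("http://localhost:9200")
                        .multiThreaded(true)
                        .build());
        JestClient client = factory.getObject();

        // ... execute actions (Index, Search, etc.) against the client ...

        client.close(); // older Jest versions use shutdownClient() instead
    }
}
```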

Compatibility

Jest Version     Elasticsearch Version
>= 6.0.0         6
>= 5.0.0         5
>= 2.0.0         2
0.1.0 - 1.0.0    1
<= 0.0.6         < 1

Also see the changelog for a detailed version history.

Support and Contribution

All questions, bug reports and feature requests are handled via the GitHub issue tracker which also acts as the knowledge base. Please see the Contribution Guidelines for more information.

Thanks

We would like to thank the following people for their significant contributions.

Copyright and License

Copyright 2018 www.searchly.com

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License in the LICENSE file, or at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • Add interface to access explanation

    The native API allows easy access to explanations (even rendering to HTML is available); it would be great to have that in Jest too. Maybe this could be combined with a way to iterate over all the entries, like this:

    List<Hit> hits = result.getHits();
    for (Hit hit : hits) {
        MyType t = hit.getSource(MyType.class);
        String html = hit.getExplanation().toHtml();
    }
    
    enhancement in progress 
    opened by imod 26
  • Getting "I/O read time out" error

    I get an "I/O read time out" error when I try to put a mapping.

    15:33:01.929 [main]  DEBUG org.apache.http.wire[86] - http-outgoing-0 << "[read] I/O error: Read timed out"
    15:33:01.929 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 >> "PUT /test_localhost_819954_1421649178439/playlist/1421649181464349000_1421649181 HTTP/1.1[\r][\n]"
    

    I would like to know why this "I/O read timed out" error occurs.

    Can somebody help me? Thank you.

    Full raw log:

    15:33:01.370 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "PUT /test_localhost_819954_1421649178439/playlist/_mapping HTTP/1.1[\r][\n]"
    15:33:01.370 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "Content-Length: 1637[\r][\n]"
    15:33:01.370 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "Content-Type: text/plain; charset=UTF-8[\r][\n]"
    15:33:01.371 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "Host: 10.99.199.131:10201[\r][\n]"
    15:33:01.371 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "Connection: Keep-Alive[\r][\n]"
    15:33:01.371 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "User-Agent: Apache-HttpClient/4.3.3 (java 1.5)[\r][\n]"
    15:33:01.372 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "Accept-Encoding: gzip,deflate[\r][\n]"
    15:33:01.372 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "[\r][\n]"
    15:33:01.372 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "{[\n]"
    15:33:01.372 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "  "properties": {[\n]"
    15:33:01.373 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "id": {[\n]"
    15:33:01.373 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "string",[\n]"
    15:33:01.373 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.373 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.373 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "trackIds": {[\n]"
    15:33:01.374 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "string",[\n]"
    15:33:01.374 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "analyzed",[\n]"
    15:33:01.374 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "analyzer": "my_whitespace_analyzer"[\n]"
    15:33:01.374 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.375 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "ownerId": {[\n]"
    15:33:01.375 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "long",[\n]"
    15:33:01.375 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.375 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.375 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "imageInfo": {[\n]"
    15:33:01.376 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "string",[\n]"
    15:33:01.376 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "no"[\n]"
    15:33:01.376 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.376 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "modified": {[\n]"
    15:33:01.376 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "date",[\n]"
    15:33:01.377 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed",[\n]"
    15:33:01.377 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "format": "yyyy-MM-dd HH:mm:ss"[\n]"
    15:33:01.377 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.377 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "created": {[\n]"
    15:33:01.378 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "date",[\n]"
    15:33:01.378 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed",[\n]"
    15:33:01.378 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "format": "yyyy-MM-dd HH:mm:ss"[\n]"
    15:33:01.378 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.378 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "privacyOpen": {[\n]"
    15:33:01.379 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "boolean",[\n]"
    15:33:01.379 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.379 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.379 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "searchAccessTotCount": {[\n]"
    15:33:01.380 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "long",[\n]"
    15:33:01.380 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.380 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.380 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "searchAccessWeeklyCount": {[\n]"
    15:33:01.381 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "long",[\n]"
    15:33:01.381 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.381 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.381 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "searchUpdated": {[\n]"
    15:33:01.382 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "date",[\n]"
    15:33:01.382 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed",[\n]"
    15:33:01.382 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "format": "yyyy-MM-dd HH:mm:ss"[\n]"
    15:33:01.382 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.383 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "title": {[\n]"
    15:33:01.383 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "string",[\n]"
    15:33:01.383 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.383 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.383 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "trackCount": {[\n]"
    15:33:01.384 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "integer",[\n]"
    15:33:01.384 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.384 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.384 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "typeCode": {[\n]"
    15:33:01.384 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "string",[\n]"
    15:33:01.385 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.385 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.385 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "externalId": {[\n]"
    15:33:01.385 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "string",[\n]"
    15:33:01.386 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.386 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.386 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "playlistFindEntityId": {[\n]"
    15:33:01.386 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "string",[\n]"
    15:33:01.386 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.387 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.387 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "subscriberCount": {[\n]"
    15:33:01.387 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "integer",[\n]"
    15:33:01.387 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "not_analyzed"[\n]"
    15:33:01.388 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.388 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "indexTitle": {[\n]"
    15:33:01.388 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "type": "string",[\n]"
    15:33:01.388 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "index": "analyzed",[\n]"
    15:33:01.388 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "analyzer": "my_ngram_analyzer"[\n]"
    15:33:01.389 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    },[\n]"
    15:33:01.389 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    "_all": {[\n]"
    15:33:01.389 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "      "enabled": "false"[\n]"
    15:33:01.389 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "    }[\n]"
    15:33:01.389 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "  }[\n]"
    15:33:01.390 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 >> "}[\n]"
    15:33:01.428 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 << "HTTP/1.1 200 OK[\r][\n]"
    15:33:01.428 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 << "Content-Type: application/json; charset=UTF-8[\r][\n]"
    15:33:01.428 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 << "Content-Length: 21[\r][\n]"
    15:33:01.429 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-1 << "[\r][\n]"
    15:33:01.429 [main]  DEBUG org.apache.http.wire[86] - http-outgoing-1 << "{"acknowledged":true}"
    15:33:01.429 [main]  INFO  SEARCH_LOG[115] - response data - 
     {
        "acknowledged":true
    }
    15:33:01.923 [main]  INFO  SEARCH_LOG[109] -  Uri - PUT [Ljava.lang.Object;@4c4b63b9[
      {http://10.99.199.131:10200,http://10.99.199.131:10201}
    ] / test_localhost_819954_1421649178439/playlist/1421649181464349000_1421649181
    15:33:01.929 [main]  DEBUG org.apache.http.wire[86] - http-outgoing-0 << "[read] I/O error: Read timed out"
    15:33:01.929 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 >> "PUT /test_localhost_819954_1421649178439/playlist/1421649181464349000_1421649181 HTTP/1.1[\r][\n]"
    15:33:01.930 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 >> "Content-Length: 596[\r][\n]"
    15:33:01.930 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 >> "Content-Type: text/plain; charset=UTF-8[\r][\n]"
    15:33:01.930 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 >> "Host: 10.99.199.131:10200[\r][\n]"
    15:33:01.930 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
    15:33:01.930 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 >> "User-Agent: Apache-HttpClient/4.3.3 (java 1.5)[\r][\n]"
    15:33:01.931 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 >> "Accept-Encoding: gzip,deflate[\r][\n]"
    15:33:01.931 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 >> "[\r][\n]"
    15:33:01.931 [main]  DEBUG org.apache.http.wire[86] - http-outgoing-0 >> "{"id":"1421649181464349000_1421649181","ownerId":1421649181485668000,"privacyOpen":false,"searchAccessTotCount":0,"searchAccessWeeklyCount":0,"searchUpdated":"2015-01-19 15:33:01","title":"playlisttitle1421649181471164000","trackCount":10,"typeCode":"N","playlistFindEntityId":"cu13bab665f9ad96a0","subscriberCount":18,"imageInfo":"{\"thumbnailId\":\"lc145_3450.13122018\",\"sourceImageUrl\":\"lc145_3450.13122018\"}","created":"2015-01-19 15:33:01","modified":"2015-01-19 15:33:01","indexTitle":"playlisttitle1421649181471164000","externalId":"up1421649181464349000_1421649181","trackIds":"123"}"
    15:33:01.943 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 << "HTTP/1.1 201 Created[\r][\n]"
    15:33:01.943 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 << "Content-Type: application/json; charset=UTF-8[\r][\n]"
    15:33:01.943 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 << "Content-Length: 134[\r][\n]"
    15:33:01.944 [main]  DEBUG org.apache.http.wire[72] - http-outgoing-0 << "[\r][\n]"
    15:33:01.944 [main]  DEBUG org.apache.http.wire[86] - http-outgoing-0 << "{"_index":"test_localhost_819954_1421649178439","_type":"playlist","_id":"1421649181464349000_1421649181","_version":1,"created":true}"
    15:33:01.945 [main]  INFO  SEARCH_LOG[115] - response data - 
     {
        "_index":"test_localhost_819954_1421649178439",
        "_type":"playlist",
        "_id":"1421649181464349000_1421649181",
        "_version":1,
        "created":true
    }
    
    opened by happyprg 23
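The error in the question above is the HTTP client's socket read timeout firing. Jest's client config exposes equivalent connect/read timeout settings (connTimeout/readTimeout on the config builder in recent versions; names vary). As a library-free sketch of the same knob, using only the JDK's HttpURLConnection:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws Exception {
        // Node address taken from the log above; no request is actually sent here.
        URL url = new URL("http://10.99.199.131:10200/_cluster/health");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection(); // no network I/O yet
        conn.setConnectTimeout(3_000);   // fail fast if the node is unreachable
        conn.setReadTimeout(10_000);     // "Read timed out" after 10 s of silence on the socket
        System.out.println(conn.getReadTimeout()); // prints 10000
    }
}
```

Note that raising the read timeout only hides the symptom if the node is genuinely slow (long GC pauses, overload); the log is worth correlating with server-side health.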
  • Add support for AWS Signature v4 request signing

    AWS' ElasticSearch clusters do not support the "typical" transport (port 9300) protocol, instead supporting only the REST (port 9200) protocol. Also, access control isn't very flexible, allowing only IP-based or signature-based approaches.

    This makes Jest a lifesaver! It's a great way to work with ElasticSearch's REST API in Java. However, because it does not support signature-based approaches, it's only possible to use Jest with IP-based authentication. This doesn't work for all cloud deployment scenarios, such as those using AWS spot instances.

    This pull request adds support for signature-based authentication to Jest. I've added it to the v1.0.2 tag because AWS' ElasticSearch offering is currently at v1.5, so adding it to master wouldn't be useful. (I'd be happy to issue another pull request against master if you'd like, although it couldn't be used in production until AWS bumps the ElasticSearch version to 2.0.)

    I'm currently using the changes in a prototype at work, in addition to the test cases provided in this pull request, which come from Amazon documentation, so the code seems solid. Please shoot any questions my way!

    And thank you for building and maintaining Jest! It's a pleasure to use. :)

    enhancement 
    opened by sigpwned 22
  • BasicClientConnManager error

    Hello. First of all, thank you for developing a solid Java REST client for Elasticsearch; I believe yours is the first library built for Java in this area. To get straight to the issue: we could not determine whether the root cause is the httpclient-4.2.2.jar version we are using or a bug in the library code itself. The exception we are getting is below:

    java.lang.IllegalStateException: Invalid use of BasicClientConnManager: connection still allocated. Make sure to release the connection before allocating another one.
        at org.apache.http.impl.conn.BasicClientConnectionManager.getConnection(BasicClientConnectionManager.java:162)
        at org.apache.http.impl.conn.BasicClientConnectionManager$1.getConnection(BasicClientConnectionManager.java:139)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:455)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at io.searchbox.client.http.JestHttpClient.execute(JestHttpClient.java:55)
        at com.test1.elasticsearch.Search(ElasticSearch.java:114)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    Note: we hit this error frequently when running heavy concurrent search operations.

    Thank you, and I wish you continued success.

    opened by tunaakn 19
  • Compatibility with 1.0

    I tried to upgrade to Elasticsearch 1.0 and noticed that the REST API has changed, so nothing works anymore. Is anybody working on this at the moment?

    enhancement 
    opened by dariodariodario 15
  • How to use a custom Gson instance's serializer/deserializer for query building?

    I have the following serializer and deserializer for java.util.Date:

    static JsonSerializer<Date> ser = new JsonSerializer<Date>() {
        @Override
        public JsonElement serialize(Date src, Type typeOfSrc,
                JsonSerializationContext context) {
            return src == null ? null : new JsonPrimitive(src.getTime());
        }
    };
    static JsonDeserializer<Date> deser = new JsonDeserializer<Date>() {
        @Override
        public Date deserialize(JsonElement json, Type typeOfT,
                JsonDeserializationContext context) throws JsonParseException {
            return json == null ? null : new Date(json.getAsLong());
        }
    };
    

    I then associate it with my Jest client as such:

    Gson gson = new GsonBuilder()
            .registerTypeAdapter(Date.class, ser)
            .registerTypeAdapter(Date.class, deser)
            .create();
    ClientConfig clientConfig = new Builder(serverURIs).gson(gson).multiThreaded(true).build();
    

    Indexing works correctly (Dates are stored as milliseconds since epoch), but search queries do not produce milliseconds via my serializer/deserializer. I have the following search query, where 'from' and 'to' are java.util.Date instances:

    ssb.query(QueryBuilders.rangeQuery("timestamp").from(from).to(to));
    

    It generates the following query:

    "range" : {
      "timestamp" : {
        "from" : "2013-07-01T22:46:23.286Z",
        "to" : "2013-08-27T22:46:23.286Z",
        "include_lower" : true,
        "include_upper" : true
      }
    }
    

    I expected the query to render 'from' and 'to' as milliseconds, as shown below.

    "from" : 1372718783286,
    "to" : 1377643583286,
    

    Am I doing this incorrectly, or is this feature not implemented?

    I also saw this thread: http://stackoverflow.com/questions/7910734/gsonbuilder-setdateformat-for-2011-10-26t202959-0700 but that answer requires upgrading to Java 7, which will probably not be an option for me.

    opened by churro-s 14
  • Adds ability to filter nodes if discovery is on

    This closes #303.

    This added functionality allows building an HttpClientConfig with optional filter criteria on node info (previously, turning discovery on would seek out all nodes with HTTP endpoints enabled).

    Using the syntax outlined in the manual for Nodes Info, you can specify which types of nodes to autodiscover based on the attributes of each node.

    Note that filtering by master:true etc. doesn't work with the Elasticsearch Nodes Info API, but you can use any of the syntax shown in the example, or filter by your own custom attributes by adding node.<some attribute name>: <some attribute value> entries to the Elasticsearch configuration.

    opened by bdharrington7 13
  • Idle connection reaper

    This adds a feature to close idle connections in the connection pool, but only if configured. I'd like to add tests for this, but I must admit that even without making any changes I can't get the existing tests to pass. If you have any tips on environment settings that make the tests pass (mvn clean verify), I'd love to add some tests as well.

    The Apache HttpClient requires a separate thread to close idle connections periodically, which is a bit of a bummer. This pull request uses a similar design as the NodeChecker to periodically close any idle connections lying around in the connection pool.

    My intent is to make this an opt-in behavior (i.e. no change to existing configurations), and to work for both the HTTP and Droid clients. If I could run the tests, I'd feel more confident that I succeeded.

    Without this ability, connections lying around in the connection pool may have actually been terminated by the server, but the client app won't know until it attempts its first query on them, which results in a SocketTimeoutException.

    enhancement 
    opened by matthewbogner 12
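The reaper pattern described in this PR (a dedicated thread periodically evicting idle pooled connections, as Apache HttpClient requires) can be sketched with only the JDK's scheduler. The closeIdle Runnable below is a hypothetical stand-in for the pool's actual eviction call:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class IdleReaper {
    // Schedule closeIdle to run repeatedly -- the same shape as Jest's NodeChecker.
    public static ScheduledFuture<?> start(ScheduledExecutorService scheduler,
                                           Runnable closeIdle,
                                           long periodMillis) {
        return scheduler.scheduleWithFixedDelay(closeIdle, periodMillis, periodMillis,
                TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch ran = new CountDownLatch(3);
        ScheduledFuture<?> task = start(pool, ran::countDown, 50); // stand-in for pool eviction
        ran.await();            // blocks until the reaper has fired three times
        task.cancel(false);
        pool.shutdownNow();
        System.out.println("reaper fired 3 times");
    }
}
```

Making it opt-in, as the PR intends, amounts to only calling start when the configuration enables it.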
  • NullPointerException when turning on autodiscovery

    I turned autodiscovery on just to give it a try, and JestClientFactory dies with a NullPointerException in getObject. The failing line is JestClientFactory line 66 (startAndWait for the NodeChecker). Not sure why... I'd also like some more documentation about node discovery.

    opened by dariodariodario 12
  • Added ClearScroll action

    See https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html#_clear_scroll_api

    I'm working on Jest support for spring-data-elasticsearch. The Jest API is missing this Elasticsearch action; see https://github.com/spring-projects/spring-data-elasticsearch/blob/master/src/main/java/org/springframework/data/elasticsearch/core/ElasticsearchOperations.java#L562

    enhancement 
    opened by pulse00 11
  • Get server concurrency issue

    Fix for #311. Includes a simple implementation of an immutable circular list. Although just synchronising the access method would work too, it would add extra overhead.

    opened by alkiskal 10
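The immutable circular list mentioned in this PR can be sketched as follows (a minimal illustration of the idea, not the actual PR code): an unmodifiable list plus an atomic cursor gives lock-free round-robin selection without synchronising the access method.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

class RoundRobin<T> {
    private final List<T> items;                       // immutable after construction
    private final AtomicInteger cursor = new AtomicInteger();

    RoundRobin(List<T> items) {
        this.items = List.copyOf(items);
    }

    T next() {
        // floorMod keeps the index valid even after the int counter overflows
        return items.get(Math.floorMod(cursor.getAndIncrement(), items.size()));
    }
}

public class RoundRobinDemo {
    public static void main(String[] args) {
        RoundRobin<String> servers =
                new RoundRobin<>(List.of("http://node1:9200", "http://node2:9200"));
        System.out.println(servers.next()); // http://node1:9200
        System.out.println(servers.next()); // http://node2:9200
        System.out.println(servers.next()); // http://node1:9200
    }
}
```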
  • CatResult.parseResultArray throws java.lang.UnsupportedOperationException: JsonNull

    java.lang.UnsupportedOperationException: JsonNull
    at com.google.gson.JsonElement.getAsString(JsonElement.java:179)
    at io.searchbox.core.CatResult.parseResultArray(CatResult.java:53)
    at io.searchbox.core.CatResult.getPlainText(CatResult.java:35)

    opened by AN34 0
  • io.searchbox.client.http.JestHttpClient#deserializeResponse causes an OOM exception

    Hi. Recently, while using JestClient, an OOM exception occurred. I located the problem as follows:

    at org.apache.http.util.CharArrayBuffer.expand(CharArrayBuffer.java:60)
    at org.apache.http.util.CharArrayBuffer.append(CharArrayBuffer.java:90)
    at org.apache.http.util.EntityUtils.toString(EntityUtils.java:248)
    at org.apache.http.util.EntityUtils.toString(EntityUtils.java:291)
    at io.searchbox.client.http.JestHttpClient.deserializeResponse(JestHttpClient.java:198)
    

    Because the size of the stream is not known in advance, the buffer is expanded many times while executing toString; each expansion allocates a new heap buffer twice as large. When the response is large, this easily causes an OOM.

    org.apache.http.util.EntityUtils#toString(org.apache.http.HttpEntity, java.nio.charset.Charset)
    
    public static String toString(
                final HttpEntity entity, final Charset defaultCharset) throws IOException, ParseException {
            Args.notNull(entity, "Entity");
            final InputStream instream = entity.getContent();
            if (instream == null) {
                return null;
            }
            try {
                Args.check(entity.getContentLength() <= Integer.MAX_VALUE,
                        "HTTP entity too large to be buffered in memory");
                int i = (int)entity.getContentLength();
                // DecompressingEntity.getContentLength() = -1;
                if (i < 0) {
                    i = 4096;
                }
                Charset charset = null;
                try {
                    final ContentType contentType = ContentType.get(entity);
                    if (contentType != null) {
                        charset = contentType.getCharset();
                    }
                } catch (final UnsupportedCharsetException ex) {
                    if (defaultCharset == null) {
                        throw new UnsupportedEncodingException(ex.getMessage());
                    }
                }
                if (charset == null) {
                    charset = defaultCharset;
                }
                if (charset == null) {
                    charset = HTTP.DEF_CONTENT_CHARSET;
                }
                final Reader reader = new InputStreamReader(instream, charset);
                // Insufficient capacity
                final CharArrayBuffer buffer = new CharArrayBuffer(i);
                final char[] tmp = new char[1024];
                int l;
                while((l = reader.read(tmp)) != -1) {
                    // Internally expanded multiple times
                    buffer.append(tmp, 0, l);
                }
                return buffer.toString();
            } finally {
                instream.close();
            }
        }
    
    opened by xbcrh 0
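The failure mode above (a CharArrayBuffer starting at 4096 chars and doubling until the heap is exhausted) suggests two mitigations: pre-size the buffer from Content-Length when it is known, and refuse to buffer past a hard cap. A stdlib-only sketch (readBody and the cap are illustrative helpers, not Jest API):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

public class CappedRead {
    static String readBody(InputStream in, long contentLength, int maxChars) throws IOException {
        // Pre-size from Content-Length when known, instead of starting at 4096 and doubling.
        int capacity = (contentLength > 0 && contentLength <= maxChars) ? (int) contentLength : 8192;
        StringBuilder sb = new StringBuilder(capacity);
        try (Reader reader = new InputStreamReader(in, StandardCharsets.UTF_8)) {
            char[] tmp = new char[1024];
            int n;
            while ((n = reader.read(tmp)) != -1) {
                if (sb.length() + n > maxChars) {
                    // Fail fast rather than doubling the buffer toward an OOM.
                    throw new IOException("response larger than " + maxChars + " chars; refusing to buffer");
                }
                sb.append(tmp, 0, n);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        byte[] body = "{\"acknowledged\":true}".getBytes(StandardCharsets.UTF_8);
        System.out.println(readBody(new ByteArrayInputStream(body), body.length, 1 << 20));
    }
}
```

The deeper fix for very large responses is to stream the entity straight into the JSON parser instead of materialising it as a String at all.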
  • How to add parameters to the '/_search' method, like '/_search?preference=zxcvzxd123'?

    Say the same user runs the same request twice in a row and documents do not come back in the same order both times; this is a pretty bad experience, isn't it? Unfortunately this can happen if you have replicas (index.number_of_replicas greater than 0). The reason is that Elasticsearch selects the shards a query should go to in a round-robin fashion, so it is quite likely that running the same query twice in a row will hit different copies of the same shard. The recommended way to work around this is to use a string that identifies the logged-in user (a user id or session id, for instance) as the preference. This ensures that all queries from a given user always hit the same shards, so scores remain more consistent across queries.

    How can I add parameters to the '/_search' method, like '/_search?preference=zxcvzxd123'?

    opened by dap3ng 0
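At the HTTP level the preference is just a query-string parameter on the _search URI, as sketched below with only the JDK. searchUri is an illustrative helper, not Jest API; in Jest itself, search actions accept extra parameters through the action builder's parameter-setting methods, whose exact names depend on the Jest version.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PreferenceParam {
    // Build a _search path carrying a per-user preference value.
    static String searchUri(String index, String type, String preference) {
        return "/" + index + "/" + type + "/_search?preference="
                + URLEncoder.encode(preference, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Using a session id as the preference pins the user to the same shard copies.
        System.out.println(searchUri("playlist_index", "playlist", "session-42"));
        // -> /playlist_index/playlist/_search?preference=session-42
    }
}
```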