A scalable, distributed Time Series Database.

Overview
       ___                 _____ ____  ____  ____
      / _ \ _ __   ___ _ _|_   _/ ___||  _ \| __ )
     | | | | '_ \ / _ \ '_ \| | \___ \| | | |  _ \
     | |_| | |_) |  __/ | | | |  ___) | |_| | |_) |
      \___/| .__/ \___|_| |_|_| |____/|____/|____/
           |_|    The modern time series database.

OpenTSDB is a distributed, scalable Time Series Database (TSDB) written on
top of HBase.  OpenTSDB was written to address a common need: store, index
and serve metrics collected from computer systems (network gear, operating
systems, applications) at a large scale, and make this data easily accessible
and graphable.

Thanks to HBase's scalability, OpenTSDB allows you to collect thousands of
metrics from tens of thousands of hosts and applications, at a high rate
(every few seconds). OpenTSDB will never delete or downsample data and can
easily store hundreds of billions of data points.

OpenTSDB is free software and is available under both LGPLv2.1+ and GPLv3+.
Find out more about OpenTSDB at http://opentsdb.net
Comments
  • OpenTSDB leaking Sockets?


    I've hit too many open files, digging into this:

    [root@ny-devtsdb04 fd]# ps aux | grep opentsdb.conf
    root      2076  3.8  1.5 5489036 260496 pts/0  Sl   Mar21  61:23 java -enableassertions -enablesystemassertions -classpath /usr/local/share/opentsdb/asynchbase-1.4.1.jar:/usr/local/share/opentsdb/guava-13.0.1.jar:/usr/local/share/opentsdb/jackson-annotations-2.2.3.jar:/usr/local/share/opentsdb/jackson-core-2.2.3.jar:/usr/local/share/opentsdb/jackson-databind-2.2.3.jar:/usr/local/share/opentsdb/log4j-over-slf4j-1.7.2.jar:/usr/local/share/opentsdb/logback-classic-1.0.9.jar:/usr/local/share/opentsdb/logback-core-1.0.9.jar:/usr/local/share/opentsdb/netty-3.6.2.Final.jar:/usr/local/share/opentsdb/slf4j-api-1.7.2.jar:/usr/local/share/opentsdb/suasync-1.4.0.jar:/usr/local/share/opentsdb/tsdb-2.0.0.jar:/usr/local/share/opentsdb/zookeeper-3.3.6.jar:/usr/local/share/opentsdb net.opentsdb.tools.TSDMain --port=4242 --staticroot=build/staticroot --cachedir=/tmp/tsd --zkquorum=ny-devtsdb01,ny-devtsdb03,ny-devtsdb04 --config /root/dev/opentsdb.conf
    root     23362  0.0  0.0 103256   872 pts/0    S+   17:32   0:00 grep opentsdb.conf
    [root@ny-devtsdb04 fd]# ls -l /proc/2076/fd | grep socket | wc -l
    4049
    [root@ny-devtsdb04 fd]# netstat -ap | grep 2076 | wc -l
    6
    [root@ny-devtsdb04 fd]# lsof -p 2076 | grep sock | head
    java    2076 root   17u  sock                0,6      0t0 13372319 can't identify protocol
    java    2076 root   49u  sock                0,6      0t0 13394689 can't identify protocol
    java    2076 root   50u  sock                0,6      0t0 13394637 can't identify protocol
    java    2076 root   51u  sock                0,6      0t0 13395575 can't identify protocol
    java    2076 root   52u  unix 0xffff880437c5a0c0      0t0 12977998 socket
    java    2076 root   53u  sock                0,6      0t0 12977996 can't identify protocol
    java    2076 root   54u  sock                0,6      0t0 13394738 can't identify protocol
    java    2076 root   56u  sock                0,6      0t0 13395172 can't identify protocol
    java    2076 root   57u  sock                0,6      0t0 13394785 can't identify protocol
    java    2076 root   58u  sock                0,6      0t0 13376378 can't identify protocol
    

    I was doing some maintenance the other day on my HBase cluster and restarting region servers, so maybe when region servers "disappear" from the cluster, the connections don't get closed properly?

    (I still need to update asynchbase; maybe that is part of it?)
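    The leak signature above — thousands of fds that lsof can only describe as "can't identify protocol" while netstat sees almost nothing — can be counted mechanically. A minimal Python sketch for triaging such output (purely a diagnostic illustration, not OpenTSDB code; the sample lines are copied from the lsof output above):

    ```python
    def count_leaked_sockets(lsof_lines):
        """Count fds lsof reports as bare 'sock ... can't identify protocol'
        entries: the kernel still holds the fd, but it no longer maps to any
        connection that netstat can see (the leak signature above)."""
        return sum(1 for line in lsof_lines
                   if " sock " in line and "can't identify protocol" in line)

    LSOF_SAMPLE = [
        "java    2076 root   17u  sock                0,6      0t0 13372319 can't identify protocol",
        "java    2076 root   52u  unix 0xffff880437c5a0c0      0t0 12977998 socket",
        "java    2076 root   49u  sock                0,6      0t0 13394689 can't identify protocol",
    ]

    # The healthy unix-domain socket on fd 52u is not counted.
    assert count_leaked_sockets(LSOF_SAMPLE) == 2
    ```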

    bug 
    opened by kylebrandt 33
  • Align the start of downsampling


    Aligned the start time of downsampling by the given downsampling interval so that the same set of data points belongs to the same downsampling period regardless of the start time of a user-requested time range.

    • Made the start time of a downsampling period its representative timestamp without computing the average of the timestamps of the data points of the period.
    • Factored downsampling code out from Span.java to its own class to make it easier to write unit tests.
    • Made the type of downsampling result double regardless of the original type to get rid of the loop that checked the type of the raw input data points. By using double, we could handle very small integers and very big integers within a reasonable error margin. Some values are too small to round off. For example, the integer average of four values (2, 2, 2, 1) is 1, which is far from the real average of 1.75. Some values are so big that it could cause long-integer overflow while downsampling.
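    The alignment and averaging points above can be illustrated with a small Python sketch (illustrative only; the function name `align_start` is hypothetical, not OpenTSDB's actual code):

    ```python
    def align_start(timestamp, interval):
        """Snap a timestamp down to the start of its downsampling bucket,
        so bucket membership does not depend on the query's start time."""
        return timestamp - (timestamp % interval)

    # Two nearby data points land in the same 60-second period no matter
    # where the user-requested time range begins:
    assert align_start(1391725419, 60) == align_start(1391725438, 60) == 1391725380

    # Why the downsampled result type is double: the integer average of
    # (2, 2, 2, 1) truncates to 1, while the true average is 1.75.
    values = [2, 2, 2, 1]
    assert sum(values) // len(values) == 1     # integer average, far off
    assert sum(values) / len(values) == 1.75   # double average, exact
    ```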
    feature request hard 
    opened by jesse5e 27
  • Can't build opentsdb 2.2.0


    Grabbed source tarball from https://github.com/OpenTSDB/opentsdb/releases/download/v2.2.0/opentsdb-2.2.0.tar.gz and extracted, and went to that dir.

    There's lots of output when I try and build opentsdb, but I think the most relevant error is: No rule to make target 'net/opentsdb/core/AggregationIterator.class', needed by 'tsdb-2.2.0.jar'. Stop.

    Full output

    bash-4.3$ ./build.sh 
    + test -f configure
    + test -d build
    + mkdir build
    + cd build
    + test -f Makefile
    + ../configure
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for a thread-safe mkdir -p... ../build-aux/install-sh -c -d
    checking for gawk... no
    checking for mawk... no
    checking for nawk... no
    checking for awk... awk
    checking whether make sets $(MAKE)... yes
    checking whether make supports nested variables... yes
    checking for md5sum... no
    checking for md5... /sbin/md5
    checking for java... /Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/bin/java
    checking for javac... /Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/bin/javac
    checking for jar... /Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/bin/jar
    checking for true... /usr/bin/true
    checking for javadoc... /Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/bin/javadoc
    checking for wget... no
    checking for curl... /usr/bin/curl
    checking that generated files are newer than configure... done
    configure: creating ./config.status
    config.status: creating Makefile
    config.status: creating opentsdb.spec
    config.status: creating build-aux/fetchdep.sh
    + MAKE=make
    ++ uname -s
    + '[' Darwin = FreeBSD ']'
    + exec make
    ../build-aux/gen_build_data.sh src/tools/BuildData.java net.opentsdb.tools 2.2.0
    Generating src/tools/BuildData.java
    fatal: Not a git repository (or any of the parent directories): .git
    fatal: Not a git repository (or any of the parent directories): .git
    /Applications/Xcode.app/Contents/Developer/usr/bin/make  all-am
    rm -f tsdb tsdb.tmp
    script=tsdb; pkgdatadir=''; \
              abs_srcdir='/private/tmp/opentsdb20160224-99228-12y38mp/opentsdb-2.2.0/build/..'; abs_builddir='/private/tmp/opentsdb20160224-99228-12y38mp/opentsdb-2.2.0/build'; \
              srcdir=''; test -f ./$script.in || srcdir=../; sed -e "s:@pkgdatadir[@]:$pkgdatadir:g" -e "s:@abs_srcdir[@]:$abs_srcdir:g" -e "s:@abs_builddir[@]:$abs_builddir:g" -e "s:@configdir[@]:$configdir:g" ${srcdir}$script.in >$script.tmp
    make[1]: *** No rule to make target `net/opentsdb/core/AggregationIterator.class', needed by `tsdb-2.2.0.jar'.  Stop.
    make[1]: *** Waiting for unfinished jobs....
    chmod +x tsdb.tmp
    chmod a-w tsdb.tmp
    mv tsdb.tmp tsdb
    /Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/bin/javac -Xlint -source 6 -encoding utf-8 -d . -cp ../third_party/hbase/asynchbase-1.7.1.jar:../third_party/guava/guava-18.0.jar:../third_party/slf4j/log4j-over-slf4j-1.7.7.jar:../third_party/logback/logback-classic-1.0.13.jar:../third_party/logback/logback-core-1.0.13.jar:../third_party/jackson/jackson-annotations-2.4.3.jar:../third_party/jackson/jackson-core-2.4.3.jar:../third_party/jackson/jackson-databind-2.4.3.jar:../third_party/netty/netty-3.9.4.Final.jar:../third_party/protobuf/protobuf-java-2.5.0.jar:../third_party/slf4j/slf4j-api-1.7.7.jar:../third_party/suasync/async-1.4.0.jar:../third_party/zookeeper/zookeeper-3.4.5.jar:../third_party/apache/commons-math3-3.4.1.jar:  ../src/core/AggregationIterator.java ../src/core/Aggregator.java ../src/core/Aggregators.java ../src/core/AppendDataPoints.java ../src/core/BatchedDataPoints.java ../src/core/ByteBufferList.java ../src/core/ColumnDatapointIterator.java ../src/core/CompactionQueue.java ../src/core/Const.java ../src/core/DataPoint.java ../src/core/DataPoints.java ../src/core/DataPointsIterator.java ../src/core/Downsampler.java ../src/core/DownsamplingSpecification.java ../src/core/FillingDownsampler.java ../src/core/FillPolicy.java ../src/core/IncomingDataPoint.java ../src/core/IncomingDataPoints.java ../src/core/IllegalDataException.java ../src/core/Internal.java ../src/core/MutableDataPoint.java ../src/core/Query.java ../src/core/QueryException.java ../src/core/RateOptions.java ../src/core/RateSpan.java ../src/core/RowKey.java ../src/core/RowSeq.java ../src/core/SaltScanner.java ../src/core/SeekableView.java ../src/core/Span.java ../src/core/SpanGroup.java ../src/core/TSDB.java ../src/core/Tags.java ../src/core/TsdbQuery.java ../src/core/TSQuery.java ../src/core/TSSubQuery.java ../src/core/WritableDataPoints.java ../src/graph/Plot.java ../src/meta/Annotation.java ../src/meta/TSMeta.java ../src/meta/TSUIDQuery.java ../src/meta/UIDMeta.java 
../src/query/QueryUtil.java ../src/query/filter/TagVFilter.java ../src/query/filter/TagVLiteralOrFilter.java ../src/query/filter/TagVNotKeyFilter.java ../src/query/filter/TagVNotLiteralOrFilter.java ../src/query/filter/TagVRegexFilter.java ../src/query/filter/TagVWildcardFilter.java ../src/search/SearchPlugin.java ../src/search/SearchQuery.java ../src/search/TimeSeriesLookup.java ../src/stats/Histogram.java ../src/stats/StatsCollector.java ../src/stats/QueryStats.java ../src/tools/ArgP.java ../src/tools/CliOptions.java ../src/tools/CliQuery.java ../src/tools/CliUtils.java ../src/tools/DumpSeries.java ../src/tools/Fsck.java ../src/tools/FsckOptions.java ../src/tools/MetaPurge.java ../src/tools/MetaSync.java ../src/tools/Search.java ../src/tools/TSDMain.java ../src/tools/TextImporter.java ../src/tools/TreeSync.java ../src/tools/UidManager.java ../src/tree/Branch.java ../src/tree/Leaf.java ../src/tree/Tree.java ../src/tree/TreeBuilder.java ../src/tree/TreeRule.java ../src/tsd/AbstractHttpQuery.java ../src/tsd/AnnotationRpc.java ../src/tsd/BadRequestException.java ../src/tsd/ConnectionManager.java ../src/tsd/GnuplotException.java ../src/tsd/GraphHandler.java ../src/tsd/HttpJsonSerializer.java ../src/tsd/HttpSerializer.java ../src/tsd/HttpQuery.java ../src/tsd/HttpRpc.java ../src/tsd/HttpRpcPlugin.java ../src/tsd/HttpRpcPluginQuery.java ../src/tsd/LineBasedFrameDecoder.java ../src/tsd/LogsRpc.java ../src/tsd/PipelineFactory.java ../src/tsd/PutDataPointRpc.java ../src/tsd/QueryRpc.java ../src/tsd/RpcHandler.java ../src/tsd/RpcPlugin.java ../src/tsd/RpcManager.java ../src/tsd/RTPublisher.java ../src/tsd/SearchRpc.java ../src/tsd/StaticFileRpc.java ../src/tsd/StatsRpc.java ../src/tsd/StorageExceptionHandler.java ../src/tsd/SuggestRpc.java ../src/tsd/TelnetRpc.java ../src/tsd/TreeRpc.java ../src/tsd/UniqueIdRpc.java ../src/tsd/WordSplitter.java ../src/uid/FailedToAssignUniqueIdException.java ../src/uid/NoSuchUniqueId.java ../src/uid/NoSuchUniqueName.java 
../src/uid/RandomUniqueId.java ../src/uid/UniqueId.java ../src/uid/UniqueIdInterface.java ../src/utils/ByteArrayPair.java ../src/utils/Config.java ../src/utils/DateTime.java ../src/utils/Exceptions.java ../src/utils/FileSystem.java ../src/utils/JSON.java ../src/utils/JSONException.java ../src/utils/Pair.java ../src/utils/PluginLoader.java ../src/utils/Threads.java ../src/tools/BuildData.java
    warning: [options] bootstrap class path not set in conjunction with -source 1.6
    ../src/uid/UniqueId.java:197: warning: [dep-ann] deprecated item is not annotated with @Deprecated
      public long maxPossibleId() {
                  ^
    ../src/uid/UniqueId.java:441: warning: [rawtypes] found raw type: Deferred
          final Deferred d;
                ^
      missing type arguments for generic class Deferred<T>
      where T is a type-variable:
        T extends Object declared in class Deferred
    ../src/core/TSSubQuery.java:237: warning: [dep-ann] deprecated item is not annotated with @Deprecated
      public Map<String, String> getTags() {
                                 ^
    ../src/core/TSSubQuery.java:295: warning: [dep-ann] deprecated item is not annotated with @Deprecated
      public void setTags(Map<String, String> tags) {
                  ^
    ../src/meta/TSUIDQuery.java:92: warning: [dep-ann] deprecated item is not annotated with @Deprecated
      public TSUIDQuery(final TSDB tsdb) {
             ^
    ../src/meta/TSUIDQuery.java:226: warning: [dep-ann] deprecated item is not annotated with @Deprecated
      public void setQuery(final String metric, final Map<String, String> tags) {
                  ^
    ../src/meta/TSUIDQuery.java:550: warning: [dep-ann] deprecated item is not annotated with @Deprecated
      public static Deferred<IncomingDataPoint> getLastPoint(final TSDB tsdb, 
                                                ^
    ../src/tools/Fsck.java:809: warning: [cast] redundant cast to float
                    Float.floatToRawIntBits((float)value_as_float));
                                            ^
    ../src/tools/Fsck.java:853: warning: [cast] redundant cast to float
                    Float.floatToRawIntBits((float)value_as_float));
                                            ^
    ../src/tsd/AbstractHttpQuery.java:129: warning: [deprecation] getHeaders() in HttpMessage has been deprecated
            request.getHeaders().size());
                   ^
    ../src/tsd/AbstractHttpQuery.java:130: warning: [deprecation] getHeaders() in HttpMessage has been deprecated
        for (final Entry<String, String> header : request.getHeaders()) {
                                                         ^
    ../src/tsd/AbstractHttpQuery.java:155: warning: [deprecation] getHeaders() in HttpMessage has been deprecated
            request.getHeaders().size());
                   ^
    ../src/tsd/AbstractHttpQuery.java:156: warning: [deprecation] getHeaders() in HttpMessage has been deprecated
        for (final Entry<String, String> header : request.getHeaders()) {
                                                         ^
    ../src/tsd/AbstractHttpQuery.java:451: warning: [deprecation] toStringHelper(Object) in Objects has been deprecated
        return Objects.toStringHelper(this)
                      ^
    ../src/tsd/QueryRpc.java:374: warning: [deprecation] getLastPoint(TSDB,byte[],boolean,int,long) in TSUIDQuery has been deprecated
              deferreds.add(TSUIDQuery.getLastPoint(tsdb, entry.getKey(), 
                                      ^
    ../src/tsd/GraphHandler.java:188: warning: [rawtypes] found raw type: HashSet
        final HashSet<String>[] aggregated_tags = new HashSet[nqueries];
                                                      ^
      missing type arguments for generic class HashSet<E>
      where E is a type-variable:
        E extends Object declared in class HashSet
    ../src/tsd/LogsRpc.java:115: warning: [cast] redundant cast to ILoggingEvent
            final ILoggingEvent event = (ILoggingEvent) logbuf.get(nevents);
                                        ^
    ../src/utils/JSON.java:214: warning: [deprecation] createJsonParser(String) in JsonFactory has been deprecated
          return jsonMapper.getFactory().createJsonParser(json);
                                        ^
    ../src/utils/JSON.java:236: warning: [deprecation] createJsonParser(byte[]) in JsonFactory has been deprecated
          return jsonMapper.getFactory().createJsonParser(json);
                                        ^
    ../src/utils/JSON.java:258: warning: [deprecation] createJsonParser(InputStream) in JsonFactory has been deprecated
          return jsonMapper.getFactory().createJsonParser(json);
                                        ^
    ../src/utils/PluginLoader.java:69: warning: [rawtypes] found raw type: Class
      private static final Class<?>[] PARAMETER_TYPES = new Class[] {
                                                            ^
      missing type arguments for generic class Class<T>
      where T is a type-variable:
        T extends Object declared in class Class
    22 warnings
    make: *** [all] Error 2
    
    bug 
    opened by CamJN 22
  • Handle duplicate timestamps


    Duplicate timestamps are handled automatically with last-written wins. In addition, the compaction code is cleaned up and unified - instead of having separate code paths for trivial and complex compactions, there is only one code path. However, it is faster than the trivial path was before, particularly for large rows - on my laptop, a row of 3600 1-sample values was 10-20% faster, and 3600000 1-sample values was nearly twice as fast. It is even faster compared to the previous complex path.
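    Last-written-wins deduplication can be sketched in Python (illustrative only; OpenTSDB's compaction code operates on HBase cells, not Python structures):

    ```python
    def dedup_last_wins(cells):
        """Resolve duplicate timestamps by keeping the value written last.
        `cells` is a write-ordered list of (timestamp, value) pairs."""
        resolved = {}
        for ts, value in cells:      # later writes overwrite earlier ones
            resolved[ts] = value
        return sorted(resolved.items())

    # Two writes to timestamp 1000: the second write wins.
    writes = [(1000, 1.0), (2000, 2.0), (1000, 5.0)]
    assert dedup_last_wins(writes) == [(1000, 5.0), (2000, 2.0)]
    ```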

    opened by jtamplin 22
  • [Development] Improve the OpenTSDB Build System


    Hello maintainers,

    I would love to contribute to OpenTSDB, but setting up a development environment is too complex. Is there a simpler way of setting up a local OpenTSDB environment and developing on top of it?

    This would be great for the community and would enable rapid development on the project.

    feature request 
    opened by utkarshcmu 21
  • Rate Options seem to not be working via the HTTP API


    resetValue inside rateOptions does not seem to be working. For example, with the following query I get values whose magnitude far exceeds 10,000:

    curl -X POST -H "Content-Type: application/json" -d '{"start":"2014/02/06-22:23:25","end":"2014/02/06-22:31:25","queries":[{"aggregator":"sum","metric":"haproxy.frontend.stot","rate":true,"rateOptions":{"counter":true,"resetValue":10000}}]}' http://ny-devtsdb04:4242/api/query

    [{"metric":"haproxy.frontend.stot","tags":{"svname":"FRONTEND"},"aggregateTags":["pxname","tier","host"],"dps":{"1391725408":1011.9333333333335,"1391725419":1012.2000000000002,"1391725423":1030.6000000000004,"1391725434":1030.5333333333335,"1391725438":986.4666666666669,"1391725439":979.2875,"1391725449":979.2875,"1391725454":943.3666666666669,"1391725469":984.7333333333333,"1391725482":-5216934.290909092,"1391725484":-5216906.624242427,"1391725497":1011.9757575757577,"1391725499":1000.3090909090909,"1391725514":1021.309090909091,"1391725523":1021.2475524475525,"1391725529":1052.047552447553,"1391725538":-789292.7583333332,"1391725544":-789342.5583333333,"1391725553":1002.5333333333334,"1391725559":992.3333333333331,"1391725568":992.1999999999999,"1391725570":992.4999999999999,"1391725574":1014.3,"1391725585":-3020989.6645833333,"1391725589":-3021003.397916666,"1391725600":1000.2,"1391725604":1047.2666666666669,"1391725615":1047.6000000000001,"1391725619":1042.4666666666667,"1391725630":1042.2,"1391725634":1018.8000000000002,"1391725645":1019.0000000000002,"1391725649":1022.9333333333336,"1391725660":1022.666666666667,"1391725664":1022.4000000000002,"1391725675":1022.8666666666669,"1391725679":1000.3333333333333,"1391725690":999.7999999999998,"1391725694":1015.9333333333335,"1391725705":1016.0000000000001,"1391725709":1049.0666666666668,"1391725710":1033.2749999999999,"1391725720":1033.8083333333334,"1391725725":964.9000000000002,"1391725735":964.7000000000003,"1391725740":1000.6666666666667,"1391725751":1000.3000000000001,"1391725755":792.7666666666669,"1391725766":3229.5333333333333,"1391725770":2439.8000000000006,"1391725781":3102.733333333334,"1391725796":3063.4000000000005,"1391725811":1604.4666666666667,"1391725826":1037.3333333333333,"1391725830":-8446318.749999998,"1391725841":-8446356.416666664,"1391725845":999.1333333333336,"1391725856":992.4000000000004,"1391725860":992.5333333333338,"1391725871":1009.4000000000005,"1391725876":1009.766666666667}}]

    Note the negative spikes: "1391725830":-8446318.749999998 and "1391725841":-8446356.416666664.

    When I pass what I believe to be the same query via the web interface, it seems to be working:

    #start=2014/02/06-22:23:25&end=2014/02/06-22:31:25&m=sum:rate%7Bcounter,,10000%7D:haproxy.frontend.stot&o=&yrange=%5B-1000:%5D&wxh=800x600


    Am I missing some difference between the two, or is there something not working here?
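    For context, the intended resetValue semantics are roughly: compute the per-second delta of the counter, and report 0 instead of any rate whose magnitude exceeds resetValue, since that is what a counter reset or process restart produces. A Python sketch of that logic (illustrative only, not OpenTSDB's implementation):

    ```python
    def counter_rate(points, reset_value):
        """Per-second rate for a monotonically increasing counter.
        `points` is a time-ordered list of (timestamp, value) pairs.
        A rate whose magnitude exceeds `reset_value` (the signature of a
        counter reset) is reported as 0 instead of a huge negative spike."""
        rates = []
        for (t0, v0), (t1, v1) in zip(points, points[1:]):
            rate = (v1 - v0) / (t1 - t0)
            if abs(rate) > reset_value:
                rate = 0.0               # counter reset detected: suppress
            rates.append((t1, rate))
        return rates

    # The counter restarts between t=20 and t=30; without the reset check
    # the second rate would be -500050.0, like the spikes in the issue.
    pts = [(10, 5_000_000), (20, 5_001_000), (30, 500)]
    assert counter_rate(pts, reset_value=10000) == [(20, 100.0), (30, 0.0)]
    ```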

    bug 
    opened by kylebrandt 21
  • add "top-n" capabilities


    Hi, when dealing with a lot of metrics (especially tag=*), it would be nice to be able to say "give me only the top-n entries". The main reason is that even with more than a handful of metrics the graph is likely to be cluttered, and having an indication of the top-n metrics could be useful (e.g. latency graphs to identify outliers). I'm not sure how easy this is to implement, however, or whether it can be done on the HBase side.
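    A top-n selection over grouped series could look like the following Python sketch (purely illustrative; `top_n` and its ranking key are hypothetical, and whether such ranking could be pushed down to HBase is exactly the open question here):

    ```python
    def top_n(series, n, key=max):
        """Return the n series with the largest aggregate value.
        `series` maps a tag value to its list of data points; `key` is the
        per-series aggregate used for ranking (max by default, e.g. for
        spotting latency outliers)."""
        ranked = sorted(series.items(), key=lambda kv: key(kv[1]), reverse=True)
        return dict(ranked[:n])

    latencies = {
        "host=a": [12, 15, 11],
        "host=b": [90, 220, 180],   # the outlier worth graphing
        "host=c": [14, 13, 16],
    }
    assert list(top_n(latencies, 1)) == ["host=b"]
    ```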

    feature request hard 
    opened by filippog 20
  • Release request 2.3 RC1


    Please release 2.3 RC1 if possible sometime in the near future. I will test query matching and expressions using Grafana with that release candidate.

    Thanks

    question 
    opened by utkarshcmu 19
  • Tiered compaction


    Made the corresponding code changes:

    1. Write the OpenTSDB timestamp as the HBase cell timestamp when writing a metric.
    2. During OpenTSDB compaction, use the maximum timestamp of all the cells being compacted as the HBase cell timestamp of the compacted cell.
    3. Add the timestamp to the scanner.
    enhancement 
    opened by karanmehta93 18
  • org.hbase.async.RemoteException with CDH 5.7


    Similar to the last comment in #772 , since upgrading to CDH 5.7, I am getting this constantly in my logs:

    org.hbase.async.RemoteException: Call queue is full on /0.0.0.0:60020, too many items queued ?
    org.hbase.async.RemoteException: Call queue is full on /0.0.0.0:60020, too many items queued ?
    Caused by: org.hbase.async.RemoteException: Call queue is full on /0.0.0.0:60020, too many items queued ?

    I ran it with DEBUG and, most of the time, especially for writes, I see it selecting the correct region servers: id: 0x231245fe, /10.10.7.101:33714 => /10.10.7.36:60020] Sending RPC #26

    but I can't seem to figure out why it's trying to go to 0.0.0.0, and this just cropped up when I upgraded to CDH 5.7 :/

    question 
    opened by devaudio 17
  • start tsdb problem


    When I run the following command: ./build/tsdb tsd --port=4242 --staticroot=build/staticroot --cachedir="/tmp/tsdtmp" --zkquorum ubuntu23

    The following errors appeared:

    2014-02-13 18:19:39,545 INFO [main] ZooKeeper: Initiating client connection, connectString=ubuntu23 sessionTimeout=5000 watcher=org.hbase.async.HBaseClient$ZKClient@6adb93a2
    2014-02-13 18:19:39,565 INFO [main] HBaseClient: Need to find the -ROOT- region
    2014-02-13 18:19:39,570 INFO [main-SendThread(ubuntu23:2181)] ClientCnxn: Opening socket connection to server ubuntu23/10.45.33.23:2181. Will not attempt to authenticate using SASL (unknown error)
    2014-02-13 18:19:39,574 INFO [main-SendThread(ubuntu23:2181)] ClientCnxn: Socket connection established to ubuntu23/10.45.33.23:2181, initiating session
    2014-02-13 18:19:39,601 INFO [main-SendThread(ubuntu23:2181)] ClientCnxn: Session establishment complete on server ubuntu23/10.45.33.23:2181, sessionid = 0x442a5b00940013, negotiated timeout = 6000
    2014-02-13 18:19:39,615 ERROR [main-EventThread] HBaseClient: The znode for the -ROOT- region doesn't exist!
    2014-02-13 18:19:40,618 ERROR [main-EventThread] HBaseClient: The znode for the -ROOT- region doesn't exist!
    2014-02-13 18:19:41,637 ERROR [main-EventThread] HBaseClient: The znode for the -ROOT- region doesn't exist!
    2014-02-13 18:19:42,658 ERROR [main-EventThread] HBaseClient: The znode for the -ROOT- region doesn't exist!

    I searched on the internet, but I couldn't find any solutions.

    Has anybody else run into this problem?

    opened by boastcao 17
  • Solving security vulnerabilities in the dependencies of opentsdb


    In my company, we are using OpenTSDB. Our primary concern right now is to fix the security vulnerabilities in the software we are using. I have listed the vulnerable packages below. Please suggest how to upgrade these packages to their latest versions in OpenTSDB.

    ch.qos.logback:logback-core commons-collections:commons-collections ch.qos.logback:logback-classic com.fasterxml.jackson.core:jackson-databind com.google.protobuf:protobuf-java org.apache.zookeeper:zookeeper org.apache.httpcomponents:httpclient net.sourceforge.htmlunit:htmlunit io.netty:netty com.google.guava:guava commons-io:commons-io commons-codec:commons-codec junit:junit

    security 
    opened by Ashwini864 4
  • OpenTSDB 2.4.1 Remote Code Execution


    During our research at Oxeye Security, we found that OpenTSDB is vulnerable to remote code execution: user-controlled input is written to a Gnuplot configuration file, and Gnuplot is then run with the generated configuration.

    As we don't want to publish zero days on the web without first contacting you, please provide us with a secure email address so we can communicate the description, reproduction steps, and more.

    This vulnerability was discovered by Gal Goldshtein and Daniel Abeles.

    opened by oxeye-daniel 5
  • do we need yarn for opentsdb


    Hi team, I read in the documentation that OpenTSDB only needs HDFS and HBase. Do I really need YARN in a production setup? If I am only setting up Hadoop to use OpenTSDB, do I really need YARN, and if so, why? Can anyone explain? Thanks and regards, Vadiraj

    opened by vadirajks 1
  • Fixed incorrect result of Count Downsampler when setting Rollup data


    Issue

    When using the Count downsampler in an environment where rollup data is pre-aggregated, incorrect results are returned.

    For raw data, the number of valid data points within the downsample interval is counted. That behaviour is correct.

    For rollup data, however, the pre-aggregated count already stored in the data should be output instead.

    The existing implementation does not yield the correct aggregate because the Count downsample runs through the same code path as raw data even when rollup data is used.

    Resolution

    When the Count downsampler is used with rollup data, the fix outputs the value of the pre-aggregated count data.
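    The distinction can be sketched in Python (illustrative only, not OpenTSDB code): for raw data the count is the number of stored points in the interval, while each rollup cell already holds a pre-aggregated count, so the correct answer is the sum of those stored values.

    ```python
    def downsample_count(cells, rollup=False):
        """Count downsampling for one interval.
        Raw data: count the stored points.  Rollup data: each cell already
        carries a pre-aggregated count, so sum the stored values instead."""
        if rollup:
            return sum(value for _, value in cells)
        return len(cells)

    raw = [(0, 1.0), (10, 2.0), (20, 3.0)]     # three raw points
    rolled = [(0, 60), (3600, 58)]             # hourly cells storing counts
    assert downsample_count(raw) == 3
    assert downsample_count(rolled, rollup=True) == 118
    ```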

    opened by funamoto 0
  • From security review


    Per the documentation, the argumentless SecureRandom constructor seems sufficient to seed the PRNG instance with true-random information. Thoughts?

    https://docs.oracle.com/javase/8/docs/api/java/security/SecureRandom.html

    opened by SeanPMiller 1
Releases (v2.4.1)
  • v2.4.1 (Sep 2, 2021)

    • Version 2.4.1 (2021-09-02)

    Noteworthy Changes:

    • Add a config flag to enable splitting queries across the rollup and raw data tables. Another config determines when to split the queries.
    • Fix for CVE-2020-35476 that now validates and limits the inputs for Gnuplot query parameters to prevent remote code execution.
    • Default log config will log CLI tools at INFO to standard out.
    • New check_tsd_v2 script that evaluates individual metric groups given a filter.
    • Collect stats from meta cache plugins.
    • Add a python script to list and pretty-print queries running on a TSD.
    • Add a single, standalone TSD systemd startup script and default to that instead of the multi-port TSD script.

    Bug Fixes:

    • Fix the "--sync" flag for FSCK to wait for repairs to execute against storage before proceeding.
    • Fix expression queries to allow metric only and filterless queries.
    • Fix an NPE in /api/query/last when appends are enabled.
    • Fix races in the salt scanner and multigets on the storage maps.
    • Fix rollup queries that need sum and count for downsampling instead of group bys.
    • Fix fuzzy row filters for later versions of HBase by using filter pairs. And allow it to be combined with a regex filter.
    • Fix stats from the individual salt scanners and overall query stats concurrency issues.
    • Rename the stat "maxScannerUidtoStringTime" to "maxScannerUidToStringTime".
    • Fix the min initial value for double values in the AggregationIterator.
    • Fix rollup queries producing unexpected results.
    • Fix various unit tests.
    • Support rollups in FSCK.
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.4.1-1-20210902183110-root.noarch.rpm(12.38 MB)
    opentsdb-2.4.1_all.deb(11.97 MB)
  • v2.4.0 (Dec 17, 2018)

    • Version 2.4.0 (2018-12-16)

    Noteworthy Changes:

    • Set default data block encoding to DIFF in the create table script.
    • Add callbacks to log errors in the FSCK tool when a call was made to fix something.
    • Add a sum of squares aggregator "squareSum".
    • Add the diff aggregator that computes the difference between the first and last values.
    • Add a SystemD template to the RPM package.
    • Allow tags to be added via HTTP header.
    • Add example implementations for the Authorization and Authentication plugins.
    • Change tsd.storage.use_otsdb_timestamp to default to false.
    • Literal or filter now allows single character values.
    • Rollup query code now only uses the downsampler value to pick an interval.
    • Add jdk 8 in the debian script.
    • Set up fill policies in the Nagios check.

    Bug Fixes:

    • Fix rollup scanner filter for single aggregate queries.
    • Fix FSCK HBase timestamps when deduping. Sometimes they were negative.
    • Fix exception handling when writing data over HTTP with the sync flag enabled.
    • Fix missing source files in the Makefile.
    • Change UID cache to longs from ints and add hit and miss counters.
    • Fix HighestCurrent returning the wrong results.
    • Fix running query stats queryStart timestamp to millis.
    • Fix TimeShift millisecond bug.
    • Fix post remove step in the debian package.
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.4.0.noarch.rpm(12.37 MB)
    opentsdb-2.4.0.tar.gz(75.65 MB)
    opentsdb-2.4.0_all.deb(11.95 MB)
  • v2.3.2 (Dec 17, 2018)

    • Version 2.3.2 (2018-12-16)

    Noteworthy Changes:

    • A new Python wrapper script to make FSCK repair runs easier.
    • Track performance in the Nagios/Icinga script
    • Add a Contributions file.
    • Add a config, 'tsd.core.bulk.allow_out_of_order_timestamps' to allow out of order timestamps for bulk ingest.
    • NOTE: This version also includes a JDK 8 compiled version of Jackson due to security patches. If you need to run with an older JDK please replace the Jackson JARs with older versions.

    Bug Fixes:

    • Unwrap NoSuchUniqueIds when writing data points to make it easier to understand exceptions.
    • Fix an NPE in the PutDataPointRpc class if a data point in the list is null.
    • Fix a Makefile error in the clean portion.
    • Fix an NPE in the UIDManager print result.
    • Fix a bug in the UI where Y formats may contain a percent sign.
    • Allow specifying the data block encoding and TTL in the HBase table creation script.
    • Change the make and TSDB scripts to use relative paths.
    • Fix parsing of use_meta from the URI for the search endpoint.
    • Fix the clean cache script to be a bit more OS agnostic.
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.3.2.noarch.rpm(11.75 MB)
    opentsdb-2.3.2.tar.gz(75.08 MB)
    opentsdb-2.3.2_all.deb(11.39 MB)
  • v2.3.1(May 22, 2018)

    • Version 2.3.1 (2018-04-21)

    Noteworthy Changes:

    • When setting up aggregators, advance to the first data point equal to or greater than the query start timestamp. This helps with calendar downsampling intervals.
    • Add support to the Nagios check script for downsampling fill policies.

    Bug Fixes:

    • Fix expression calculation by avoiding double execution and checking both output types for boolean values.
    • Fix missing tools scripts in builds.
    • Default to HBase 1.2.5 in the OSX install script
    • Upgrade AsyncBigtable to 0.3.1
    • Log query stats when a channel is closed unexpectedly.
    • Add the Java 8 path in the Debian init script and remove Java 6.
    • Pass the column family name to the get requests in the compaction scheduler.
    • Fix a comparison issue in the UI on group by tags.
    • Filter annotation queries by the starting timestamp, excluding those in a row that began before the query start time.
    • Tiny step towards purging backticks from Gnuplot scripts.
    • Remove the final annotation from the meta classes so they can be extended.
    • Fix the javacc maven plugin version.
    • Fix the literal or filter to allow single character filters.
    • Fix query start stats logging to use milliseconds instead of nanoseconds.
    • Move Jackson and Netty to newer versions for security reasons.
    • Upgrade to AsyncHBase 1.8.2 for compatibility with HBase 1.3 and 2.0
    • Fix the Highest Current calculation to handle empty time series.
    • Change the cache hits counters to longs.
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.3.1.noarch.rpm(11.75 MB)
    opentsdb-2.3.1.tar.gz(75.08 MB)
    opentsdb-2.3.1_all.deb(11.39 MB)
  • v2.4.0RC2(Oct 9, 2017)

    • Version 2.4.0 RC2 (2017-10-08)

    Noteworthy Changes:

    • Modify the RPC handler plugin system so that it parses only the first part of the URI instead of the entire path. Now plugins can implement sub-paths.
    • Return the HTML5 doctype for built-in UI pages
    • Add an optional byte and/or data point limit to the amount of data fetched from storage. This allows admins to prevent OOMing TSDs due to massive queries.
    • Allow a start time via config when enabling the date tiered compaction in HBase
    • Provide the option of using an LRU for caching UIDs to avoid OOMing writers and readers with too many strings
    • Optionally avoid writing to the forward or reverse UID maps when a specific TSD operational mode is enabled to avoid wasting memory on maps that will never be used.

    Bug Fixes:

    • Roll back the UTF-8 UID change from RC1 wherein the stored bytes weren't converting properly and vice versa. Full UTF-8 support will have to wait for 3.x
    • Fix a build issue for Javacc
    • Add Kryo as a dependency to the fat jar
    • Javadoc fixes
    • Fix an issue with calendar aligned downsampling by seeking to the start time of the query when the zone-aligned timestamp may be earlier than the query start time
    • Add the missing QueryLimitOverride to the makefile
    • Fix compatibility with Bigtable for 2.4
    • Enable standard read-only APIs when the TSD is in write only mode
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.4.0RC2.noarch.rpm(11.99 MB)
    opentsdb-2.4.0RC2_all.deb(11.57 MB)
  • v2.3.0(Dec 31, 2016)

  • v2.2.2(Dec 29, 2016)

    • Version 2.2.2 (2016-12-29)

    Bug Fixes:

    • Fix an issue with writing metadata where using custom tags could cause the compare-and-set to fail due to variable ordering in Java's heap. Now tags are sorted so the custom tag ordering will be consistent.
    • Fix millisecond queries that would miss data at the top of the final hour if the end time was within 1 second of the top of that hour.
    • Fix a concurrent modification issue where salt scanners were not synchronized on the annotation map and could cause spinning threads.
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.2.2.rpm(9.89 MB)
    opentsdb-2.2.2.tar.gz(73.23 MB)
    opentsdb-2.2.2_all.deb(9.57 MB)
  • v2.3.0RC2(Oct 8, 2016)

    • Version 2.3.0 RC2 (2016-10-08)

    Noteworthy Changes:

    • Added a docker file and tool to build TSD docker containers (#871).
    • Log X-Forwarded-For addresses when handling HTTP requests.
    • Expand aggregator options in the Nagios check script.
    • Allow enabling or disabling the HTTP API or UI.
    • TSD will now exit when an unrecognized CLI param is passed.

    Bug Fixes:

    • Improved ALPN version detection when using Google Bigtable.
    • Fix the DumpSeries class to support appended data point types.
    • Fix queries where groupby is set to false on all filters.
    • Fix a missing attribute in the Nagios check script (#728).
    • Fix a major security bug where requesting a PNG with certain URI params could execute code on the host (#793, #781).
    • Return a proper error code when dropping caches with the DELETE HTTP verb (#830).
    • Fix backwards compatibility with HBase 0.94 when using explicit tags by removing the fuzzy filter (#837).
    • Fix an RPM build issue when creating the GWT directory.
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.3.0-RC2.tar.gz(74.44 MB)
    opentsdb-2.3.0-RC2_all.deb(10.80 MB)
    opentsdb-2.3.0_RC2.rpm(11.14 MB)
  • v2.2.1(Oct 8, 2016)

    • Version 2.2.1 (2016-10-08)

    Noteworthy Changes

    • Generate an incrementing TSMeta request only if both enable_tsuid_incrementing and tsd.core.meta.enable_realtime_ts are enabled. Previously, increments would run regardless of whether the real-time TS setting was enabled. If TSUID incrementing is disabled, a get and optional put are executed each time without modifying the meta counter field.
    • Improve metadata storage performance by removing an extra getFromStorage() call.
    • Add global Annotations to the gnuplot graphs (#773)
    • Allow creation of a TSMeta object without a TSUID (#778)
    • Move to AsyncHBase 1.7.2

    Bug Fixes:

    • Fix Python scripts to use the environment directory.
    • Fix config name for "tsd.network.keep_alive" in included config files.
    • Fix an issue with the filter metric and tag resolution chain during queries.
    • Fix an issue with malformed, double dotted timestamps (#724).
    • Fix an issue with tag filters where we need a copy before modifying the list.
    • Fix comments in the config file around TCP no delay settings.
    • Fix some query stats calculations around averaging and estimating the number of data points (#784).
    • Clean out old .SWO files (#821)
    • Fix a live-lock situation when performing regular expression or wildcard queries (#823).
    • Change the static file path for the HTTP API to be relative (#857).
    • Fix an issue where the GUI could flicker when two or more tag filters were set (#708).
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.2.1.rpm(9.89 MB)
    opentsdb-2.2.1.tar.gz(73.24 MB)
    opentsdb-2.2.1_all.deb(9.57 MB)
  • v2.3.0RC1(May 2, 2016)

    • Version 2.3.0 RC1 (2016-05-02)

    Noteworthy Changes:

    • Introduced the option --max-connection/tsd.core.connections.limit to set the maximum number of connections a TSD will accept (#638)
    • 'tsdb import' can now read from stdin (#580)
    • Added datapoints counter (#369)
    • Improved metadata storage performance (#699)
    • Added a checkbox for showing global annotations in the UI (#736)
    • Added startup plugins, can be used for Service Discovery or other integration (#719)
    • Added MetaDataCache plugin api
    • Added timeshift() function (#175)
    • Downsampling is now aligned to the Gregorian calendar (#548, #657)
    • Added a None aggregator to fetch raw data, along with first and last aggregators to fetch only the first or last data points when downsampling.
    • Added script to build OpenTSDB/HBase on OSX (#674)
    • Add cross-series expressions with mathematical operators using Jexl
    • Added query expressions (alias(), scale(), absolute(), movingAverage(), highestCurrent(), highestMax(), timeShift(), divide(), sum(), difference(), multiply()) (#625)
    • Add a Unique ID assignment filter API for enforcing UID assignment naming conventions.
    • Add a whitelist regular expression based UID assignment filter
    • Add a time series storage filter plugin API that allows processing time series data and determining if it should be stored or not.
    • Allow using OpenTSDB with Google's Bigtable cloud platform or with Apache Cassandra
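
    The calendar-aligned downsampling above surfaces in the JSON query syntax; a sketch of composing a 2.3-style query, assuming the documented 'c' suffix requests Gregorian-calendar buckets (metric, tag, and timestamp values are illustrative):

```python
import json

def build_query(metric, start, aggregator="sum", downsample=None, filters=None):
    # Body for POST /api/query; each entry in "queries" is one sub-query.
    q = {"aggregator": aggregator, "metric": metric}
    if downsample:
        q["downsample"] = downsample
    if filters:
        q["filters"] = filters
    return {"start": start, "queries": [q]}

payload = build_query(
    "sys.cpu.user", "2016/05/01-00:00:00",
    downsample="1dc-avg",  # 'c' = align buckets to the calendar day
    filters=[{"type": "literal_or", "tagk": "host",
              "filter": "web01", "groupBy": True}])
print(json.dumps(payload, indent=2))
```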

    Bug Fixes:

    • Some improperly formatted timestamps were allowed (#724)
    • Removed stdout logging from packaged logback.xml files (#715)
    • Restore the ability to create TSMeta objects via URI
    • Restore raw data points (along with post-filtered data points) in query stats
    • Built in UI will now properly display global annotations when the query string is passed
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.3.0-RC1.tar.gz(74.43 MB)
    opentsdb-2.3.0-RC1_all.deb(10.78 MB)
    opentsdb-2.3.0_RC1.noarch.rpm(11.13 MB)
  • v2.2.0(Feb 15, 2016)

    • Version 2.2.0 (2016-02-14)
      • Add the option to randomly assign UIDs to metrics to improve distribution across HBase region servers.
      • Introduce salting of data to improve distribution of high cardinality regions across region servers.
      • Introduce query stats for tracking various timings related to TSD queries.
      • Add more stats endpoints including /threads, /jvm and /region_clients
      • Allow for deleting UID mappings via CLI or the API
      • Name the various threads for easier debugging, particularly for distinguishing between TSD and AsyncHBase threads.
      • Allow for pre-fetching all of the meta information for the tables to improve performance.
      • Update to the latest AsyncHBase with support for secure HBase clusters and RPC timeouts.
      • Allow for overriding metric and tag widths via the config file. (Be careful!)
      • URLs from the API are now relative instead of absolute, allowing for easier reverse proxy use.
      • Allow for percent deviation in the Nagios check
      • Let queries skip over unknown tag values that may not exist yet (via config)
      • Add various query filters such as case (in)sensitive pipes, wildcards and pipes over tag values. Filters do not work over metrics at this time.
      • Add support for writing data points using Appends in HBase as a way of writing compacted data without having to read and re-write at the top of each hour.
      • Introduce an option to emit NaNs or Nulls in the JSON output when downsampling and a bucket is missing values.
      • Introduce query time flags to show the original query along with some timing stats in the response.
      • Introduce a storage exception handler plugin that will allow users to spool or requeue data points that fail writes to HBase due to various issues.
      • Rework the HTTP pipeline to support plugins with RPC implementations.
      • Allow for some style options in the Gnuplot graphs.
      • Allow for timing out long running HTTP queries.
      • Text importer will now log and continue bad rows instead of failing.
      • New percentile and count aggregators.
      • Add the /api/annotations endpoint to fetch multiple annotations in one call.
      • Add a class to support improved bulk imports by batching requests in memory for a full hour before writing.
      • Allow overriding the metric and tag UID widths via config file instead of having to modify the source code.
      • Rework the QueryStats output to be a bit more useful and add timings from the various scanners and query components.
      • Modify the UI to allow for group by or aggregate per tag (use the new query feature)
      • Rework the UI skin with the new TSDB logo and color scheme.
      • Add the QueryLog config to logback.xml so users can optionally enable logging of all queries along with their stats.
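
    The new tag filters and the NaN/null fill policy both show up in the query syntax. A small sketch of composing them, assuming the 2.2 downsample grammar of interval-aggregator[-fillpolicy] (the tag and metric values are made up):

```python
def downsample_spec(interval, aggregator, fill_policy=None):
    # e.g. downsample_spec("1h", "avg", "nan") -> "1h-avg-nan": emit NaN for
    # any downsample bucket with no values instead of interpolating.
    parts = [interval, aggregator]
    if fill_policy:
        parts.append(fill_policy)
    return "-".join(parts)

def tag_filter(ftype, tagk, expr, group_by=False):
    # 2.2 filter types include literal_or, wildcard, and their
    # case-insensitive variants; they apply to tag values, not metrics.
    return {"type": ftype, "tagk": tagk, "filter": expr, "groupBy": group_by}

print(downsample_spec("1h", "avg", "nan"))           # -> 1h-avg-nan
print(tag_filter("wildcard", "host", "web*", True))
```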
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.2.0.noarch.rpm(9.88 MB)
    opentsdb-2.2.0.tar.gz(73.23 MB)
    opentsdb-2.2.0_all.deb(9.56 MB)
  • v2.1.4(Feb 14, 2016)

    • Version 2.1.4 (2016-02-14)

    Bug Fixes:

    • Fix the meta table where the UID/TSMeta APIs were not sorting tags properly prior to creating the row key, thus allowing for duplicates if the caller changed the order of tags.
    • Fix a situation where meta sync could hang forever if a routine threw an exception.
    • Fix an NPE thrown when accessing the /logs endpoint if the Cyclic appender is not enabled in the logback config.
    • Remove an overly chatty log line in TSMeta on new time series creation.
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.1.4.noarch.rpm(7.79 MB)
    opentsdb-2.1.4.tar.gz(71.14 MB)
    opentsdb-2.1.4_all.deb(7.49 MB)
  • v2.2.0RC3(Nov 11, 2015)

  • v2.1.3(Nov 11, 2015)

  • v2.2.0RC2(Nov 10, 2015)

    • Version 2.2.0 RC2 (2015-11-09)

    Noteworthy Changes:

    • Allow overriding the metric and tag UID widths via config file instead of having to modify the source code.

    Bug Fixes:

    • OOM handling script now handles multiple TSDs installed on the same host.
    • Fix a bug where queries never return if an exception is thrown from the storage layer.
    • Fix random metric UID assignment in the CLI tool.
    • Fix for meta data sync when salting is enabled.
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.2.0RC2.noarch.rpm(9.33 MB)
    opentsdb-2.2.0RC2.tar.gz(73.05 MB)
    opentsdb-2.2.0RC2_all.deb(9.39 MB)
  • v2.1.2(Nov 10, 2015)

  • v2.2.0RC1(Sep 12, 2015)

    • Version 2.2.0 RC1 (2015-09-12)

    Noteworthy Changes:

    • Add the option to randomly assign UIDs to metrics to improve distribution across HBase region servers.
    • Introduce salting of data to improve distribution of high cardinality regions across region servers.
    • Introduce query stats for tracking various timings related to TSD queries.
    • Add more stats endpoints including /threads, /jvm and /region_clients
    • Allow for deleting UID mappings via CLI or the API
    • Name the various threads for easier debugging, particularly for distinguishing between TSD and AsyncHBase threads.
    • Allow for pre-fetching all of the meta information for the tables to improve performance.
    • Update to the latest AsyncHBase with support for secure HBase clusters and RPC timeouts.
    • Allow for overriding metric and tag widths via the config file. (Be careful!)
    • URLs from the API are now relative instead of absolute, allowing for easier reverse proxy use.
    • Allow for percent deviation in the Nagios check
    • Let queries skip over unknown tag values that may not exist yet (via config)
    • Add various query filters such as case (in)sensitive pipes, wildcards and pipes over tag values. Filters do not work over metrics at this time.
    • Add support for writing data points using Appends in HBase as a way of writing compacted data without having to read and re-write at the top of each hour.
    • Introduce an option to emit NaNs or Nulls in the JSON output when downsampling and a bucket is missing values.
    • Introduce query time flags to show the original query along with some timing stats in the response.
    • Introduce a storage exception handler plugin that will allow users to spool or requeue data points that fail writes to HBase due to various issues.
    • Rework the HTTP pipeline to support plugins with RPC implementations.
    • Allow for some style options in the Gnuplot graphs.
    • Allow for timing out long running HTTP queries.
    • Text importer will now log and continue bad rows instead of failing.
    • New percentile and count aggregators.
    • Add the /api/annotations endpoint to fetch multiple annotations in one call.
    • Add a class to support improved bulk imports by batching requests in memory for a full hour before writing.

    Bug Fixes:

    • Modify the .rpm build to allow dashes in the name.
    • Allow the Nagios check script to handle 0 values properly in checks.
    • Fix FSCK where floating point values were not processed correctly (#430)
    • Fix missing information from the /api/uid/tsmeta calls (#498)
    • Fix more issues with FSCK around deleting columns that were in the list (#436)
    • Avoid OOM issues over Telnet when the sending client isn't reading errors off its socket fast enough by blocking writes.
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.2.0RC1.noarch.rpm(9.69 MB)
    opentsdb-2.2.0RC1.tar.gz(36.68 MB)
    opentsdb-2.2.0RC1_all.deb(9.37 MB)
  • 2.1.1(Sep 12, 2015)

  • v2.1.0(May 7, 2015)

    • Version 2.1.0 (2015-05-06)

    Bug Fixes:

    • FSCK was not handling compacted and floating point duplicates properly. Now they are merged correctly.
    • TSMeta data updates were not loading the latest data from storage on response
    • The config class will now trim spaces from booleans and integers
    • On shutdown, the idle state handler could prevent the TSD from shutting down gracefully. A new thread factory sets that thread as a daemon thread.
    • TSMeta objects were not generated if multiple writes for the same data point arrived in succession due to buffering atomic increments. Increments are no longer buffered.
    • Updated paths to the deprecated Google Code repo for dependencies.
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.1.0.noarch.rpm(7.77 MB)
    opentsdb-2.1.0.tar.gz(71.14 MB)
    opentsdb-2.1.0_all.deb(7.48 MB)
  • v2.1.0RC2(Apr 4, 2015)

    • Version 2.1.0 RC2 (2015-04-04)

    Noteworthy Changes:

    • Handle idle connections in Netty by closing them after some period of inactivity
    • Support compressed HTTP responses

    Bug Fixes:

    • Various RPM script and package fixes
    • Properly handle deletions of the cache directory while running
    • Queries for nonexistent metrics now return a 400 error
    • Fix an issue with the meta sync utility where entries were not created if the counter existed
    • Report stats properly when the UID is greater than 3 bytes wide
    • Fix UI hangups when incorrect tags are supplied
    • Log illegal put exceptions at the debug level
    • Fix global annotation retrieval where some entries were missing
    • Fix unit tests that were not properly mocking out clients or threads and causing JVM OOMs
    • Fix accounting bugs in FSCK
    • Sort aggregators in the UI
    • Properly throw an exception if the user supplies an empty tag on storage or retrieval
    • Handle missing directories in the Config class
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.1.0RC2.noarch.rpm(7.49 MB)
    opentsdb-2.1.0RC2.tar.gz(70.25 MB)
    opentsdb-2.1.0RC2_all.deb(7.20 MB)
  • v2.1.0RC1(Nov 10, 2014)

    • Version 2.1.0 RC1 (2014-11-09)

    Noteworthy Changes:

    • Add a server side timeout for sockets that haven't written data in some time
    • Major FSCK utility update to handle new objects, delete bad data and deal with duplicate data points.
    • Optionally preload portions of the name to UID maps at startup
    • Add read and write modes to the TSD to disable writing data points via telnet or HTTP
    • Optionally disable the diediedie commands to prevent users from shutting down a TSD
    • Optionally block the auto creation of tag keys and values
    • Downsampling is now aligned on modulus boundaries so that we avoid interpolation as much as possible. Data returned is now more along the lines of what users expect, e.g. 24 data points per day when downsampling on hourly intervals instead of random points based on the span's timestamps.
    • Add the /api/search/lookup endpoint and CLI endpoint for looking up time series based on the meta or data tables
    • Rework of the TSD compaction code to process compactions faster
    • Optionally handle duplicate data points gracefully during compaction or query time without throwing exceptions
    • Add Allow-Headers CORs support
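
    The lookup endpoint takes a small JSON body; a hedged sketch of building one, with field names following the documented /api/search/lookup request (the metric and tag values are illustrative):

```python
import json

def build_lookup(metric=None, tags=None, limit=25, use_meta=False):
    # POST body for /api/search/lookup: find time series by metric and/or
    # tag pairs, searching the meta table (useMeta) or the raw data table.
    body = {"limit": limit, "useMeta": use_meta}
    if metric:
        body["metric"] = metric
    if tags:
        body["tags"] = [{"key": k, "value": v} for k, v in tags.items()]
    return body

print(json.dumps(build_lookup("sys.cpu.user", {"host": "web01"})))
```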
    Source code(tar.gz)
    Source code(zip)
    opentsdb-2.1.0RC1.noarch.rpm(7.48 MB)
    opentsdb-2.1.0RC1.tar.gz(70.31 MB)
    opentsdb-2.1.0RC1_all.deb(7.47 MB)
  • v2.0.0RC2(Sep 30, 2013)

  • v2.0.0RC1(Aug 5, 2013)

Owner
OpenTSDB
The Open Source Time Series Data Base