Timely is a time series database application that provides secure access to time series data. Timely is written in Java and designed to work with Apache Accumulo and Grafana. Documentation is located here.
Accumulo-backed time series database
Overview
Comments
-
0.0.4: Could not find or load main class timely.StandaloneServer
Got this error while running ./bin/timely-standalone.sh:
Could not find or load main class timely.StandaloneServer
I know little about Java. Could you let me know what you need to debug this?
Cheers!
question -
Test grafana timely-app in actual deployment
Most of the development and testing of the Grafana app has been done using Docker and localhost. We need to test the app with different hosts for everything, to make sure we aren't only succeeding because the localhost hostname is used everywhere.
Perhaps doing this together with #47 makes sense.
Grafana -
Docker example update
The docker example wasn't working for me on master, so I made some tweaks to get it running and added some of the setup steps (cert creation, Grafana datasource/dashboard configs, etc.) to the scripts to decrease manual configuration time.
Steps should now be:
1. mvn package
2. cd ./server; docker-compose up
3. log in to Grafana and click the browser-cert check in the Timely datasource
-
Binary is not portable
I think this is a weird case, but I compiled on OS X and ran on Linux. Since I compiled on OS X, the netty jar doesn't contain the correct native netty library, but the JVM reports Linux as the OS, so Timely tries to use the native epoll event loop instead of the NioEventLoopGroup.
I think we should allow a config option to force one implementation over the other. Or we could offer netty mode options nio, epoll, auto, defaulting to auto so the current behavior is preserved.
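A minimal sketch of how such a mode option could drive the choice. The names are hypothetical (this models only the selection logic and does not touch netty itself); in a real implementation the availability check would be netty's Epoll.isAvailable():

```java
// Sketch of the proposed netty mode option (nio, epoll, auto). Class and
// method names are hypothetical, not Timely's actual configuration API.
public class NettyModeChooser {

    enum Mode { EPOLL, NIO, AUTO }

    /**
     * Decide which event loop implementation to use. In AUTO mode we fall
     * back to NIO when the native epoll transport is unavailable, which
     * covers the "jar built on OS X, run on Linux" case described above.
     */
    static String chooseEventLoop(Mode mode, boolean onLinux, boolean epollAvailable) {
        switch (mode) {
            case EPOLL:
                return "EpollEventLoopGroup"; // forced; fails fast if the native lib is missing
            case NIO:
                return "NioEventLoopGroup";
            default: // AUTO: current behavior, but with a safe fallback
                return (onLinux && epollAvailable) ? "EpollEventLoopGroup" : "NioEventLoopGroup";
        }
    }

    public static void main(String[] args) {
        // Jar built on OS X but run on Linux: native epoll lib is missing.
        System.out.println(chooseEventLoop(Mode.AUTO, true, false));  // NioEventLoopGroup
        System.out.println(chooseEventLoop(Mode.AUTO, true, true));   // EpollEventLoopGroup
    }
}
```

With AUTO as the default, existing deployments keep their behavior, while a non-portable jar degrades to NIO instead of failing at startup.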
documentation -
Query with no tags specified just returns first series.
Noticed this when using the insert-test-data program. I created charts for each sys.cpu0.user (host=), and one aggregated by rack (rack=), which came out fine. Then I created one without any tags, and it returned the data for host=r01n01 only. My thought is that this should aggregate over all the data, to be consistent. This could be computationally expensive and would probably be requested a lot when people are creating new charts, so perhaps a special tag should explicitly indicate aggregating everything (tags=none ?).
In the attached image the aggregators are sums. The first chart has peaks around 100, the second has peaks around 200, and the third should have peaks around 400, but actually shows just the r01n01 data.
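The behavior being asked for could be sketched like this: with no tags given, sum the values of every matching series at each timestamp, so the third chart would peak around the combined 400. The types here are hypothetical, not Timely's actual model classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of "no tags => aggregate everything": sum the values of every
// matching series at each timestamp. Hypothetical types for illustration.
public class AggregateAll {

    record Point(long ts, double value) {}

    /** Sum all series into a single one, keyed by timestamp. */
    static List<Point> sumAcrossSeries(List<List<Point>> series) {
        Map<Long, Double> sums = new TreeMap<>();
        for (List<Point> s : series) {
            for (Point p : s) {
                sums.merge(p.ts(), p.value(), Double::sum); // add to existing bucket
            }
        }
        List<Point> out = new ArrayList<>();
        sums.forEach((ts, v) -> out.add(new Point(ts, v)));
        return out;
    }

    public static void main(String[] args) {
        // Two hosts reporting at the same timestamps, as in the charts above.
        List<Point> host1 = List.of(new Point(0, 100), new Point(60, 110));
        List<Point> host2 = List.of(new Point(0, 95), new Point(60, 105));
        System.out.println(sumAcrossSeries(List.of(host1, host2)));
        // [Point[ts=0, value=195.0], Point[ts=60, value=215.0]]
    }
}
```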
-
support OpenTSDB style aggregation
OpenTSDB allows aggregation of multiple series into a single one, as well as aggregation of values resulting from downsampled values. While Timely supports downsampling, it would be nice if it were also possible to combine multiple series into a single one via aggregation.
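The downsampling half of the OpenTSDB model can be sketched as below: each series is first reduced into fixed windows (here with an average), and the per-window values of several series would then be combined with a second aggregator. The helper is hypothetical, not Timely's API:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

// Sketch of OpenTSDB-style two-stage aggregation, stage one: downsample a
// single series into fixed windows by averaging. Hypothetical helper.
public class Downsample {

    /** Average raw (timestamp -> value) samples into windowMillis buckets. */
    static Map<Long, Double> avgDownsample(Map<Long, Double> samples, long windowMillis) {
        return samples.entrySet().stream().collect(Collectors.groupingBy(
                e -> (e.getKey() / windowMillis) * windowMillis,  // bucket start time
                TreeMap::new,
                Collectors.averagingDouble(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        Map<Long, Double> raw = new TreeMap<>(Map.of(
                0L, 1.0, 30_000L, 3.0,      // first minute: avg 2.0
                60_000L, 5.0, 90_000L, 7.0  // second minute: avg 6.0
        ));
        System.out.println(avgDownsample(raw, 60_000)); // {0=2.0, 60000=6.0}
    }
}
```

Combining several downsampled series afterwards is then just a merge over the common bucket timestamps (sum, max, etc.), which is the piece this issue asks Timely to add.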
enhancement -
certs readme
Create a readme with how to create a CA and all the certs needed to run everything in https, including client cert in browser and interface to authenticate user cert.
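Not the project's actual scripts, but a minimal openssl sketch (hypothetical subjects and file names) of the pieces such a readme would cover: a throwaway CA, a server cert signed by it, and a client cert bundled as PKCS#12 for import into the browser:

```shell
set -e

# 1. Create the CA key and a self-signed CA certificate.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=Timely Test CA" -days 365 -out ca.pem

# 2. Server key + CSR, signed by the CA.
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=localhost" -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -days 365 -out server.pem

# 3. Client key + cert, then a PKCS#12 bundle the browser can import.
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=test-user" -out client.csr
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key -CAcreateserial -days 365 -out client.pem
openssl pkcs12 -export -in client.pem -inkey client.key -out client.p12 -passout pass:changeit
```

The readme would additionally need to document importing client.p12 into the browser and pointing the server at ca.pem so it can authenticate the user cert.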
documentation -
Extreme query start and stop times produce errors
This is vague and probably more of a reminder to look into this further, but I've noticed some errors when adjusting the query timestamps while working with the web socket. My guess is it's not a web-socket-specific thing, but I haven't dug into it. I was using the web socket query operation and the Timely standalone docker-compose setup for this. Here's what I've captured so far:
Setting a start time of 0 with a stop time of now fails, even with only a little data in the system.
2016-09-01 14:30:13,638 WARN [batch scanner 2990-1 looking up 1 ranges at acd1ffcae3a4:43079] impl.TabletServerBatchReaderIterator (TabletServerBatchReaderIterator.java:run(378)) - Error on server acd1ffcae3a4:43079
org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server acd1ffcae3a4:43079
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:695)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:349)
    at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.TApplicationException: Internal error processing continueMultiScan
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_continueMultiScan(TabletClientService.java:344)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.continueMultiScan(TabletClientService.java:330)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:663)
    ... 6 more
2016-09-01 14:30:13.724 WARN 17 --- [ntLoopGroup-6-2] i.n.c.DefaultChannelPipeline : An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server acd1ffcae3a4:43079
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.hasNext(TabletServerBatchReaderIterator.java:181)
    at timely.store.DataStoreImpl.query(DataStoreImpl.java:463)
    at timely.netty.websocket.timeseries.WSQueryRequestHandler.channelRead0(WSQueryRequestHandler.java:29)
    at timely.netty.websocket.timeseries.WSQueryRequestHandler.channelRead0(WSQueryRequestHandler.java:17)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:108)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:108)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler$1.channelRead(WebSocketServerProtocolHandler.java:147)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1066)
    at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:900)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1279)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:889)
    at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:883)
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:389)
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:305)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:136)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server acd1ffcae3a4:43079
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:695)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:349)
    at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
    ... 1 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing continueMultiScan
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_continueMultiScan(TabletClientService.java:344)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.continueMultiScan(TabletClientService.java:330)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:663)
    ... 6 more
Setting the stop time far in the future produces an OOM. Are result buffers pre-allocated?
java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.hasNext(TabletServerBatchReaderIterator.java:181)
    at timely.store.DataStoreImpl.query(DataStoreImpl.java:463)
    at timely.netty.websocket.timeseries.WSQueryRequestHandler.channelRead0(WSQueryRequestHandler.java:29)
    at timely.netty.websocket.timeseries.WSQueryRequestHandler.channelRead0(WSQueryRequestHandler.java:17)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:108)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:108)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler$1.channelRead(WebSocketServerProtocolHandler.java:147)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1066)
    at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:900)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:327)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1279)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:334)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:889)
    at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:883)
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:389)
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:305)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:136)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.apache.thrift.protocol.TCompactProtocol.readBinary(TCompactProtocol.java:669)
    at org.apache.accumulo.core.data.thrift.TKeyValue$TKeyValueStandardScheme.read(TKeyValue.java:439)
    at org.apache.accumulo.core.data.thrift.TKeyValue$TKeyValueStandardScheme.read(TKeyValue.java:416)
    at org.apache.accumulo.core.data.thrift.TKeyValue.read(TKeyValue.java:355)
    at org.apache.accumulo.core.data.thrift.MultiScanResult$MultiScanResultStandardScheme.read(MultiScanResult.java:861)
    at org.apache.accumulo.core.data.thrift.MultiScanResult$MultiScanResultStandardScheme.read(MultiScanResult.java:840)
    at org.apache.accumulo.core.data.thrift.MultiScanResult.read(MultiScanResult.java:742)
    at org.apache.accumulo.core.data.thrift.InitialMultiScan$InitialMultiScanStandardScheme.read(InitialMultiScan.java:428)
    at org.apache.accumulo.core.data.thrift.InitialMultiScan$InitialMultiScanStandardScheme.read(InitialMultiScan.java:405)
    at org.apache.accumulo.core.data.thrift.InitialMultiScan.read(InitialMultiScan.java:346)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$startMultiScan_result$startMultiScan_resultStandardScheme.read(TabletClientService.java:10185)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$startMultiScan_result$startMultiScan_resultStandardScheme.read(TabletClientService.java:10170)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$startMultiScan_result.read(TabletClientService.java:10109)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startMultiScan(TabletClientService.java:317)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startMultiScan(TabletClientService.java:297)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:634)
    at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:349)
    at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
    ... 1 more
-
Summarize data older than some relative time
Recent data can be very granular (seconds). This gets costly in terms of storage as we keep more data online. Provide the ability to summarize (count, min, avg, max) the data points in a time series that are older than some configurable time period. The time window for summarization should also be configurable.
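The proposal above could be sketched as follows: points older than a cutoff collapse into per-window (count, min, avg, max) summaries, while newer points stay granular. Types and names are hypothetical:

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of the proposed summarization: points older than `cutoff` are
// collapsed into windowMillis buckets of (count, min, avg, max); newer
// points are left untouched. Hypothetical types for illustration.
public class Summarizer {

    record Summary(long count, double min, double avg, double max) {}

    static Map<Long, Summary> summarizeOlderThan(Map<Long, Double> samples,
                                                 long cutoff, long windowMillis) {
        Map<Long, long[]> counts = new TreeMap<>();
        Map<Long, double[]> stats = new TreeMap<>(); // [min, sum, max] per bucket
        samples.forEach((ts, v) -> {
            if (ts >= cutoff) return;                // recent data stays granular
            long bucket = (ts / windowMillis) * windowMillis;
            counts.computeIfAbsent(bucket, b -> new long[1])[0]++;
            double[] s = stats.computeIfAbsent(bucket,
                    b -> new double[] { Double.MAX_VALUE, 0, -Double.MAX_VALUE });
            s[0] = Math.min(s[0], v);
            s[1] += v;
            s[2] = Math.max(s[2], v);
        });
        Map<Long, Summary> out = new TreeMap<>();
        counts.forEach((bucket, c) -> {
            double[] s = stats.get(bucket);
            out.put(bucket, new Summary(c[0], s[0], s[1] / c[0], s[2]));
        });
        return out;
    }

    public static void main(String[] args) {
        Map<Long, Double> samples = new TreeMap<>(Map.of(
                0L, 1.0, 1_000L, 3.0,   // old: summarized into one window
                100_000L, 9.0));        // newer than the cutoff: untouched
        System.out.println(summarizeOlderThan(samples, 50_000, 60_000));
        // {0=Summary[count=2, min=1.0, avg=2.0, max=3.0]}
    }
}
```

In Timely this would presumably run server-side (e.g. as a compaction-time iterator) rather than in client code, with both the cutoff and the window configurable.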
API Change -
Create toplevel endpoint that returns version info
The Grafana datasource needs a way to test that the datasource is working. Currently the OpenTSDB datasource does a suggest query for 'cpu'. My thought is to have an endpoint (/api ?) that returns a 200 OK and maybe some Timely info like the version.
My initial thought was to use the api/metrics endpoint, but it sometimes takes a while to return.
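The response body for such an endpoint could be as small as this. The field names are hypothetical; the point is just that it is constant-time, unlike api/metrics:

```java
// Sketch of a lightweight version-info body a /version (or /api) endpoint
// could return with a 200 OK, so the Grafana datasource test has a fast,
// constant-time target. Field names are hypothetical.
public class VersionResponse {

    static String versionJson(String version, String commit) {
        return String.format(
                "{\"application\":\"timely\",\"version\":\"%s\",\"commit\":\"%s\"}",
                version, commit);
    }

    public static void main(String[] args) {
        // A handler would write this body and set Content-Type: application/json.
        System.out.println(versionJson("0.0.4", "abc1234"));
    }
}
```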
-
Variable length age-off for metrics
Currently, all metrics have a uniform age-off established via timely.metric.age.off.days. We might want to retain some metrics for a longer time. Opening this issue to discuss how we might implement this.
-
separate table sets / namespaces, e.g. metrics written to sevenday.timely and monthly.timely will age off at different rates - use options like timely.metric.ageoff.sevenday.days=7 and timely.table.namespaces=sevenday,monthly
-
store all metrics in the same tables and bake the age-off into the auths with a top-level alternation:
( Ao7d | A & B & C )
-
Other ideas?
-
-
Bump decode-uri-component from 0.2.0 to 0.2.2 in /grafana/timely-app
Bumps decode-uri-component from 0.2.0 to 0.2.2.
Release notes
Sourced from decode-uri-component's releases.
v0.2.2
- Prevent overwriting previously decoded tokens 980e0bf
https://github.com/SamVerschueren/decode-uri-component/compare/v0.2.1...v0.2.2
v0.2.1
- Switch to GitHub workflows 76abc93
- Fix issue where decode throws - fixes #6 746ca5d
- Update license (#1) 486d7e2
- Tidelift tasks a650457
- Meta tweaks 66e1c28
https://github.com/SamVerschueren/decode-uri-component/compare/v0.2.0...v0.2.1
Commits
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
- @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
- @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
- @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the Security Alerts page.
-
Bump jackson-databind from 2.10.0 to 2.12.7.1
Bumps jackson-databind from 2.10.0 to 2.12.7.1.
Commits
- See full diff in compare view
-
Bump jsoup from 1.14.2 to 1.15.3
Bumps jsoup from 1.14.2 to 1.15.3.
Release notes
Sourced from jsoup's releases.
jsoup 1.15.3
jsoup 1.15.3 is out now, and includes a security fix for potential XSS attacks, along with other bug fixes and improvements, including more descriptive validation error messages.
Details:
jsoup 1.15.2 is out now with a bunch of improvements and bug fixes.
jsoup 1.15.1 is out now with a bunch of improvements and bug fixes.
jsoup 1.14.3
jsoup 1.14.3 is out now, adding native XPath selector support, improved <template> support, and also includes a bunch of bug fixes, improvements, and performance enhancements. See the release announcement for the full changelog.
Changelog
Sourced from jsoup's changelog.
jsoup changelog
Release 1.15.3 [2022-Aug-24]
-
Security: fixed an issue where the jsoup cleaner may incorrectly sanitize crafted XSS attempts if SafeList.preserveRelativeLinks is enabled. https://github.com/jhy/jsoup/security/advisories/GHSA-gp7f-rwcx-9369
-
Improvement: the Cleaner will preserve the source position of cleaned elements, if source tracking is enabled in the original parse.
-
Improvement: the error messages output from Validate are more descriptive. Exceptions are now ValidationExceptions (extending IllegalArgumentException). Stack traces do not include the Validate class, to make it simpler to see where the exception originated. Common validation errors including malformed URLs and empty selector results have more explicit error messages.
-
Bugfix: the DataUtil would incorrectly read from InputStreams that emitted reads smaller than the requested size. This led to incorrect results when parsing from chunked server responses, e.g. jhy/jsoup#1807
-
Build Improvement: added implementation version and related fields to the jar manifest. jhy/jsoup#1809
Release 1.15.2 [2022-Jul-04]
-
Improvement: added the ability to track the position (line, column, index) in the original input source from where a given node was parsed. Accessible via Node.sourceRange() and Element.endSourceRange(). jhy/jsoup#1790
-
Improvement: added Element.firstElementChild(), Element.lastElementChild(), Node.firstChild(), Node.lastChild(), as convenient accessors to those child nodes and elements.
-
Improvement: added Element.expectFirst(cssQuery), which is just like Element.selectFirst(), but instead of returning a null if there is no match, will throw an IllegalArgumentException. This is useful if you want to simply abort processing if an expected match is not found.
-
Improvement: when pretty-printing HTML, doctypes are emitted on a newline if there is a preceding comment. jhy/jsoup#1664
-
Improvement: when pretty-printing, trim the leading and trailing spaces of textnodes in block tags when possible, so that they are indented correctly. jhy/jsoup#1798
-
Improvement: in Element#selectXpath(), disable namespace awareness. This makes it possible to always select elements by their simple local name, regardless of whether an xmlns attribute was set. jhy/jsoup#1801
-
Bugfix: when using the readToByteBuffer method, such as in Connection.Response.body(), if the document has not already been parsed and must be read fully, and there is any maximum buffer size being applied, only the default internal buffer size is read. jhy/jsoup#1774
... (truncated)
Commits
c596417 [maven-release-plugin] prepare release jsoup-1.15.3
d2d9ac3 Changelog for URL cleaner improvement
4ea768d Strip control characters from URLs when resolving absolute URLs
985f1fe Include help link for malformed URLs
6b67d05 Improved Validate error messages
653da57 Normalized API doc link
5ed84f6 Simplified the Test Server startup
c58112a Set the read size correctly when capped
fa13c80 Added jar manifest default implementation entries.
5b19390 Bump maven-resources-plugin from 3.2.0 to 3.3.0 (#1814)
Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
@dependabot rebase will rebase this PR
@dependabot recreate will recreate this PR, overwriting any edits that have been made to it
@dependabot merge will merge this PR after your CI passes on it
@dependabot squash and merge will squash and merge this PR after your CI passes on it
@dependabot cancel merge will cancel a previously requested merge and block automerging
@dependabot reopen will reopen this PR if it is closed
@dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
@dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
@dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
@dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
@dependabot use these labels will set the current labels as the default for future PRs for this repo and language
@dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
@dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
@dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the Security Alerts page.
-
-
Bump terser from 4.8.0 to 4.8.1 in /grafana/timely-app
Bumps terser from 4.8.0 to 4.8.1.
Changelog
Sourced from terser's changelog.
v4.8.1 (backport)
- Security fix for RegExps that should not be evaluated (regexp DDOS)
Commits
- See full diff in compare view
-
Bump hadoop-common from 2.10.1 to 3.2.3
Bumps hadoop-common from 2.10.1 to 3.2.3.
Releases (latest: 0.0.5)
-
0.0.5(Aug 11, 2017)
FEATURES
- Made changes and tested with Grafana 4.4.3 (alerting not supported yet)
- Updated provided dashboards
NOTES
- The build no longer includes the collectd modules by default; use the collectd Maven profile at build time
Source code(zip)
-
0.0.4(Feb 2, 2017)
FEATURES
- Added a Timely client library that contains Java code for interacting with the Timely API. Not all operations are finished.
- Added an analytics module for writing Apache Flink jobs
- Exposed Subscription Scanner configuration settings in the Timely configuration
PERFORMANCE
- AgeOff performance is greatly improved with a new iterator
- Added netty-transport-native-epoll as an optional runtime dependency
NOTES
- Timely has not been tested with Grafana 4.x
- The aggregation function (not the downsample aggregation function) in Grafana did not work prior to this release; it now works.
Source code(zip)
-
0.0.3(Oct 27, 2016)
FEATURES
- Configuration file format changed to YAML due to use of Spring Configuration and Spring Boot
- Removed the Double Lexicoder in the Accumulo Value; this requires removal of old data
- Added support for put operations in binary form using Google FlatBuffers
- Updated docker image
- Ageoff (in days) for individual metrics now possible, default value supported. Ageoff iterators are removed and re-applied during startup.
- Timely works with Accumulo 1.7 and 1.8
- Added support for PUT in text and binary form over UDP
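The text form of the put operation follows the familiar OpenTSDB-style line protocol. As a hedged sketch, building such a line in Java might look like the following (the metric name, timestamp units, and tags are illustrative assumptions, not taken from the Timely source):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PutLine {
    // Build an OpenTSDB-style text put line:
    //   put <metric> <timestamp> <value> <tagk=tagv> ...
    // Timestamp units and field order here are assumptions for illustration.
    static String putLine(String metric, long timestampMillis, double value,
                          Map<String, String> tags) {
        StringBuilder sb = new StringBuilder("put ")
                .append(metric).append(' ')
                .append(timestampMillis).append(' ')
                .append(value);
        for (Map.Entry<String, String> tag : tags.entrySet()) {
            sb.append(' ').append(tag.getKey()).append('=').append(tag.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("host", "r01n01");
        tags.put("rack", "r01");
        System.out.println(putLine("sys.cpu.user", 1469635200000L, 42.5, tags));
        // put sys.cpu.user 1469635200000 42.5 host=r01n01 rack=r01
    }
}
```

The same line could then be written to a plain TCP socket or, per this release, sent as a UDP datagram; the binary FlatBuffers form carries the equivalent fields in a schema-defined layout.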
PERFORMANCE
- Modified code to use one ageoff iterator instead of a stack of them
- Removed interpolation code and code that handles counters in a special manner. This should be done in the client.
- Moved rate calculation to the tablet server by using an iterator that groups time series together and then applies a filter
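The rate calculation moved to the tablet server is, at its core, a difference quotient over consecutive points of one series. A simplified sketch of that arithmetic (the real iterator also handles grouping series and filtering, which this omits):

```java
import java.util.Arrays;

public class Rate {
    // Per-interval rate in units/second for a single, already-grouped time series.
    // tsMillis and values are parallel arrays of (timestamp, value) samples.
    static double[] rates(long[] tsMillis, double[] values) {
        double[] out = new double[values.length - 1];
        for (int i = 1; i < values.length; i++) {
            double dtSeconds = (tsMillis[i] - tsMillis[i - 1]) / 1000.0;
            out[i - 1] = (values[i] - values[i - 1]) / dtSeconds;
        }
        return out;
    }

    public static void main(String[] args) {
        long[] ts = {0L, 10_000L, 20_000L};
        double[] vals = {10.0, 30.0, 30.0};
        System.out.println(Arrays.toString(rates(ts, vals))); // [2.0, 0.0]
    }
}
```

Doing this in an iterator on the tablet server means only the (much smaller) rate values cross the network, instead of every raw sample being shipped to the client for post-processing.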
NOTES
- The list of jars needed on the tablet server has increased and now includes:
- commons-lang3
- commons-collections4
- guava
- timely-client
- timely-server
Additionally, the Accumulo classloader has to be configured for post-delegation, because the version of Guava that we depend on is newer than what Accumulo and Hadoop use.
- The counter checkbox in the Grafana UI does nothing. We are looking to fix this in the next release.
Source code(zip)
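The post-delegation note above maps onto Accumulo's VFS classloader context properties. As a sketch of what that configuration might look like in accumulo-site.xml (the context name and HDFS path are placeholders, and the property names should be checked against your Accumulo version's documentation):

```xml
<!-- accumulo-site.xml: a classloader context whose jars are consulted
     before the system classpath (post-delegation), so the newer Guava wins -->
<property>
  <name>general.vfs.context.classpath.timely</name>
  <value>hdfs://namenode:8020/timely/lib/[^.].*.jar</value>
</property>
<property>
  <name>general.vfs.context.classpath.timely.delegation</name>
  <value>post</value>
</property>
```

A table then opts in to the context via its table.classpath.context property (e.g. set to timely for the metrics table).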
-
0.0.2(Jul 27, 2016)
- Created documentation. Available at https://nationalsecurityagency.github.io/timely
- Addition of Timely App for Grafana to support secure login and API calls
- Addition of instructions and configuration to create Docker containers for the different components
- Add new API operation to get the version
- Refactored the APIs to work over multiple protocols, see documentation for details
- Created WebSocket based subscription API
- Resolve tag cardinality internally based on cached metadata, order of tags in query is no longer used
- Added regex support for tag values
- Created utility to generate splits for metrics table
- Fixed some bugs
- Updated Netty to 4.0.38.Final
Source code(zip)
Owner
National Security Agency
MapDB provides concurrent Maps, Sets and Queues backed by disk storage or off-heap-memory. It is a fast and easy to use embedded Java database engine.
MapDB: database engine MapDB combines embedded database engine and Java collections. It is free under Apache 2 license. MapDB is flexible and can be u
The Heroic Time Series Database
DEPRECATION NOTICE This repo is no longer actively maintained. While it should continue to work and there are no major known bugs, we will not be impr
IoTDB (Internet of Things Database) is a data management system for time series data
English | 中文 IoTDB Overview IoTDB (Internet of Things Database) is a data management system for time series data, which can provide users specific ser
Fast scalable time series database
KairosDB is a fast distributed scalable time series database written on top of Cassandra. Documentation Documentation is found here. Frequently Asked
A scalable, distributed Time Series Database.
OpenTSDB (ASCII-art logo omitted)
An open source SQL database designed to process time series data, faster
English | 简体中文 | العربية QuestDB QuestDB is a high-performance, open-source SQL database for applications in financial services, IoT, machine learning
The Prometheus monitoring system and time series database.
Prometheus Visit prometheus.io for the full documentation, examples and guides. Prometheus, a Cloud Native Computing Foundation project, is a systems
Time series monitoring and alerting platform.
Argus Argus is a time-series monitoring and alerting platform. It consists of discrete services to configure alerts, ingest and transform metrics & ev
Time Series Metrics Engine based on Cassandra
Hawkular Metrics, a storage engine for metric data About Hawkular Metrics is the metric data storage engine part of Hawkular community. It relies on A
The Most Advanced Time Series Platform
Warp 10 Platform Introduction Warp 10 is an Open Source Geo Time Series Platform designed to handle data coming from sensors, monitoring systems and t
Scalable Time Series Data Analytics
Time Series Data Analytics Working with time series is difficult due to the high dimensionality of the data, erroneous or extraneous data, and large d
Apache Druid: a high performance real-time analytics database.
Website | Documentation | Developer Mailing List | User Mailing List | Slack | Twitter | Download Apache Druid Druid is a high performance real-time a
CrateDB is a distributed SQL database that makes it simple to store and analyze massive amounts of machine data in real-time.
About CrateDB is a distributed SQL database that makes it simple to store and analyze massive amounts of machine data in real-time. CrateDB offers the
HurricaneDB a real-time distributed OLAP engine, powered by Apache Pinot
HurricaneDB is a real-time distributed OLAP datastore, built to deliver scalable real-time analytics with low latency. It can ingest from batch data sources (such as Hadoop HDFS, Amazon S3, Azure ADLS, Google Cloud Storage) as well as stream data sources (such as Apache Kafka).
eXist Native XML Database and Application Platform
eXist-db Native XML Database eXist-db is a high-performance open source native XML database—a NoSQL document database and application platform built e
Flyway by Redgate • Database Migrations Made Easy.
Flyway by Redgate Database Migrations Made Easy. Evolve your database schema easily and reliably across all your instances. Simple, focused and powerf
Realm is a mobile database: a replacement for SQLite & ORMs
Realm is a mobile database that runs directly inside phones, tablets or wearables. This repository holds the source code for the Java version of Realm
Transactional schema-less embedded database used by JetBrains YouTrack and JetBrains Hub.
JetBrains Xodus is a transactional schema-less embedded database that is written in Java and Kotlin. It was initially developed for JetBrains YouTrack