The Most Advanced Time Series Platform


Warp 10 Platform

Introduction

Warp 10 is an Open Source Geo Time Series Platform designed to handle data coming from sensors, monitoring systems and the Internet of Things.

Geo Time Series extend the notion of Time Series by merging the sequence of sensor readings with the sequence of sensor locations.

If your data have no location information, Warp 10 will handle them as regular Time Series.
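
For illustration, a single Geo Time Series can carry both located and non-located readings. Here is a minimal WarpScript sketch (the class name and values are made up):

    NEWGTS 'sensor.temperature' RENAME
    // gts tick lat lon elev value ADDVALUE
    NOW 48.8566 2.3522 NaN 21.5 ADDVALUE     // located reading
    NOW 30 s + NaN NaN NaN 21.7 ADDVALUE     // reading without location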

Warp 10 simplifies sensor data management and analytics.

Features

The Warp 10 Platform provides a rich set of features to simplify your work around sensor data:

  • Warp 10 Storage Engine, our collection and storage layer, a Geo Time Series Database
  • WarpLib, a library dedicated to sensor data analysis, with more than 1000 functions and extension capabilities
  • WarpScript, a language specifically designed for time series analytics and one of the pillars of the analytics layer of the Warp 10 Platform (see the sketch after this list)
  • FLoWS, an alternative to WarpScript for users discovering the Warp 10 Platform. It is meant to be easy to learn, look familiar to users of other programming languages, and enable time series analysis by leveraging the whole of WarpLib.
  • Plasma and Mobius, streaming engines that let you cascade the Warp 10 Platform with Complex Event Processing solutions and build dynamic dashboards
  • Runner, a system for scheduling WarpScript program executions on the server side
  • Sensision, a framework for exposing metrics and pushing them into Warp 10
  • Standalone version running on a Raspberry Pi as well as on a beefy server, with no external dependencies
  • Replication and sharding of standalone instances using the Datalog mechanism
  • Distributed version, based on Hadoop HBase, for the most demanding environments
  • Integration with Pig, Spark, Flink, NiFi, Kafka Streams and Storm for batch and streaming analysis
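
As a taste of the analytics layer, the WarpScript sketch below builds a small synthetic series and averages it over 5-minute buckets (a hypothetical example using the standard NEWGTS, ADDVALUE and BUCKETIZE functions):

    // Build a synthetic GTS with 10 values, one per minute
    NEWGTS 'sensor.temp' RENAME
    1 10
    <%
      'i' STORE
      NOW $i 1 m * -   // tick, i minutes in the past
      NaN NaN NaN      // no location, no elevation
      20 $i +          // value
      ADDVALUE
    %> FOR
    // Average the datapoints into 5-minute buckets ending now
    [ SWAP bucketizer.mean NOW 5 m 0 ] BUCKETIZE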

Getting started

We strongly recommend you start with the getting started guide. You will learn the basics and the concepts behind Warp 10 step by step.

Learn more by browsing the documentation.

To test Warp 10 without installing it, try the free sandbox, where you can get your hands on the platform in no time.

Help & Community

The team has put a lot of effort into the documentation of the Warp 10 Platform. There are still some areas which may need improvement, so we count on you to help raise the overall quality.

We understand that discovering all the features of the Warp 10 Platform at once can be intimidating, that's why you have several options to find answers to your questions.

Our goal is to build a large community of users to move our platform into territories we haven't explored yet and to make Warp 10 and WarpScript the standards for sensor data and the IoT.

Contributing to the Warp 10 Platform

Open source software is built by people like you, who spend their free time creating things the rest of the community can use.

Want to contribute to Warp 10? We encourage you to read the contributing page first.

Commercial Support

Should you need commercial support for your projects, SenX offers support plans which will give you access to the core team developing the platform.

Don't hesitate to contact us at [email protected] for all your enquiries.

Trademarks

Warp 10, WarpScript, WarpFleet, Geo Time Series and SenX are trademarks of SenX S.A.S.

Comments
  • init file overwrites configuration file

    The problem is that the init file overwrites some parameters which we expect to set in the configuration file, e.g. the LevelDB home folder. After setting leveldb.home = /some/where/else/data in the configuration file, our Warp 10 instance failed. After investigating, we found out that this parameter is overwritten in the init file. But since I'm not sure about the process and the order in which LevelDB is started, I couldn't decide what the general solution should be.

    In our case, we introduced another parameter in the init file with the same value as in the configuration file.

    If the Warp 10 platform can work properly with these parameters set only in the configuration file, then the init file should not set them again when the configuration file exists.

    opened by mehranshakeri 10
  • Trouble to reboot directories

    We have Warp 10 version 2.5 directories deployed (sharded and replicated). They load around 130 million series. I noticed that we have trouble restarting them while series deletion is occurring. In the logs, we got:

      pool-301-thread-29 ERROR store.Directory - java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:63)
        at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:33)
        at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
        at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
        at io.warp10.continuum.store.Directory$DirectoryConsumer.run(Directory.java:1459)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

    And in terms of Metrics it's related to a lot of "warp.directory.kafka.shutdowns".

    The rebooted directories do not consume from Kafka.

    Our workaround at the moment is to forbid deletes while rebooting directories.

    Let me know if this issue has already been fixed in a newer Warp 10 version.

    opened by aurrelhebert 9
  • Slow APPLY operations when filtering with labels

    With a bucketized dataset of:

    • a list of 1600 GTS named A, each with only one datapoint and two labels ("host" and "disk").
    • another list of GTS under the same conditions, but named B. Each host + disk label combination makes the GTS unique (in both A and B).

    I want to add each unique combination of GTS from A to B:

      [ $a $b [ 'host' 'disk' ] op.add ] APPLY

    It works, but it's very slow on a recent Warp: on an old Warp 1.2.22-88, it took less than 1000 ms. On a recent Warp 2.0.2-162, the APPLY alone takes more than 4500 ms every time.

    I tried splitting each combination myself like this:

    $a [ 'host' 'disk' ] PARTITION 'pa' STORE   // partition A by host/disk
    $b [ 'host' 'disk' ] PARTITION              // partition B the same way
    [] SWAP                                     // result accumulator under the partition map
    <%
       'series' STORE                           // GTS of the current B partition
       'labels' STORE                           // host/disk values of that partition
       [
          $series
          $pa $labels GET                       // matching partition of A
          []
          op.add
       ] APPLY
       APPEND                                   // append the sums to the accumulator
    %>
    FOREACH
    

    ... and it took ~100ms.

    I am a bit worried about the low performance of the APPLY function here, and I am trying to find out what could be responsible for it. What would be the best way to perform this operation?

    PS: all time measurements were done in WarpScript with NOW time subtractions around the APPLY.

    opened by GillesBIANNIC 9
  • boolean, bitwise and comparison operators on GTS

    Fix the fate of boolean GTS operations in Warp 10 2.0: allow NOT, AND, OR, XOR on boolean GTS. I tried to optimize NOT with a flip operation, but I didn't test performance.

    I chose to implement and, or, xor as bitwise operators, not conditional ones... I can change that.

    opened by pi-r-p 9
  • Authentication: introduce other methods than warp10 token

    Recently, we discovered the need to be able to delegate a subpart of the authorisation carried by a token, like macaroons do.

    I'm not confident the Warp 10 token can evolve without breaking things, and for various reasons getting proper macaroons can be a good thing. But macaroons are way longer than Warp 10 tokens. That's why I think a Warp 10 cluster would profit from multiple authentication methods.

    In WarpScript nothing would change: you get the token as usual, but prefix it with a scheme when it's not a Warp 10 token (i.e. "macaroon:", "userpassword:"...). This way we can provide various ways to connect to the Warp 10 cluster.
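
    For instance, a prefixed token might be used exactly where a regular token is used today (a hypothetical sketch; the macaroon payload and class selector are made up):

      // hypothetical: a macaroon-backed read token, identified by its prefix
      'macaroon:MDAxY2xvY2F0aW9u...' 'rtoken' STORE
      // fetch the last 100 datapoints of matching series, as usual
      [ $rtoken '~sensor.*' {} NOW -100 ] FETCH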

    What do you think?

    Do we want to be able to build auth extensions outside of the Warp 10 codebase as well? With a dynamic classloader plugin system? Or built into the core?

    Where do I have to plug this in?

    opened by waxzce 9
  • Add possibility to have the filtered elements during a FILTER

    Here's an example:

    NEWGTS "GTS1" RENAME 
    { 'label0' '42' 'env' 'prod' } RELABEL
    
    NEWGTS "GTS2" RENAME 
    { 'label0' '43' } RELABEL
    
    2 ->LIST
    
    [ SWAP [] { 'env' 'prod' } filter.bylabels ] FILTER
    

    At the end of the script, I have the right result, but having the filtered-out elements as well could be interesting in this case.

    Is it possible to have (for example) a new type of FILTER framework, which would push two lists back onto the stack:

    • the remaining elements after filtering
    • the filtered-out elements
    opened by PierreZ 9
  • Syntax for adding other HTTP methods to URLFETCH

    Currently the URLFETCH function only supports the GET method. The signatures are:

    url:STRING headers:MAP<STRING> URLFETCH
    urls:LIST<STRING> headers:MAP<STRING> URLFETCH
    

    with headers being optional.

    To support additional methods, I suggest:

    request:MAP<STRING> headers:MAP<STRING> URLFETCH
    requests:LIST<MAP<STRING>> headers:MAP<STRING> URLFETCH
    

    where request shall have the fields 'url', 'method' and optionally 'message', with headers still being optional.
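
    A POST under the suggested signature could then look like this (purely illustrative, since this is only a proposal; the URL and payload are made up):

      { 'url' 'https://example.com/api' 'method' 'POST' 'message' '{ "hello": "world" }' }
      { 'Content-Type' 'application/json' }
      URLFETCH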

    Is that signature good?

    opened by randomboolean 8
  • Directory: provide a last_seen hint

    To know whether a series is still active, you can fetch the last available datapoint and check whether its date matches your criteria.

    If data auto-eviction is enabled, you won't even be able to look up this last datapoint.

    It would be very useful to have a last_seen hint per GTS that gives the timestamp (second precision is enough) of the last update on a GTS.

    There are two possible components in which to implement this:

    • Ingress
    • Store

    The Store component is CPU-bound and uses less memory than Ingress, which maintains the metadata cache, so in the case of a distributed deployment it would be better to implement this on the Store side.

    It would maintain a structure, like a concurrent hash map, with the last timestamp as the value for a key corresponding to the GTS ID.

    Data could be sampled, and produced on a best-effort basis to either the Directory (standalone) or a dedicated Kafka topic.

    The IndexSpec struct could have a last_seen field so that, in the end, we can perform a LASTSEEN with an optional parameter:

    [ 'RTOKEN' 'class_pattern' { labels } ] LASTSEEN

    The result would be:

    [{
        "c": "class",
        "l": {
            "label0": "value0",
            "label1": "value1"
        },
        "a": {
            "attr0": "value0"
        },
        "v": [
            [0, last_seen]
        ]
    }]
    

    This way, we can leverage all the frameworks to manipulate the result (FILTER, ...) and easily get series older than n days.

    This would be very helpful to manage the Directory.

    opened by StevenLeRoux 8
  • Generic error status codes

    Many errors have the same return code:

    • Exceeded MADS
    • Invalid Token
    • Exec Limit raised
    • Exceeded DDP

    It would be nice to have fine-grained error status codes so that we can better pinpoint the source of an issue.

    opened by StevenLeRoux 8
  • RUNNERNEXT function on standalone

    This new function allows a runner to reschedule itself.

    To test it, generate a token with this capability:

      'attributes' {
        '.cap:runner.reschedule.min.period' '5000'
      }
    

    Use it with CAPADD inside your runner. From the runner, you can schedule the next iteration at "start of the script" plus the period you want: 10 s RUNNERNEXT
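
    A minimal runner body could then look like this (a sketch; the token placeholder is hypothetical and must carry the capability above):

      // hypothetical runner script
      '<token carrying runner.reschedule.min.period>' CAPADD
      // ... the periodic work goes here ...
      10 s RUNNERNEXT   // next run starts 10 s after this script started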

    It will fail if the requested period is less than the one defined in the capability.

    The function has no effect on a standalone instance with remote runner execution, or on a distributed instance.

    opened by pi-r-p 7
  • leveldb.maxopenfiles is not respected

    Hi, when starting Warp 10, I get the following error:

      ___       __                           ____________
      __ |     / /_____ _______________      __<  /_  __ \
      __ | /| / /_  __ `/_  ___/__  __ \     __  /_  / / /
      __ |/ |/ / / /_/ /_  /   __  /_/ /     _  / / /_/ /
      ____/|__/  \__,_/ /_/    _  .___/      /_/  \____/
                               /_/
    
      Revision 2.9.0
    
    2022-01-19 18:54:11.714:INFO:iwsjoejs.AbstractConnector:Started [email protected]:37147
    2022-01-19T18:54:11,716 main INFO  store.Constants - ########[ Initialized with 1000 time units per millisecond ]###
    #####
    2022-01-19T18:54:11,725 main WARN  script.WarpFleetMacroRepository - No validator macro, default macro will reject a
    ll URLs.
    REPORT secret not set, using '1c77f05b-8aee-4328-a503-6d2363fefcf0'.
    Exception in thread "main" java.lang.RuntimeException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO e
    rror: /srv/warp10/leveldb/14939849.sst: Trop de fichiers ouverts
            at org.fusesource.leveldbjni.internal.JniDBIterator.seek(JniDBIterator.java:68)
            at io.warp10.standalone.WarpDB$WarpIterator.seek(WarpDB.java:133)
            at io.warp10.standalone.StandaloneDirectoryClient.<init>(StandaloneDirectoryClient.java:193)
            at io.warp10.standalone.Warp.main(Warp.java:379)
    Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /srv/warp10/leveldb/14939849.sst: Trop
     de fichiers ouverts
            at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
            at org.fusesource.leveldbjni.internal.NativeIterator.checkStatus(NativeIterator.java:121)
            at org.fusesource.leveldbjni.internal.NativeIterator.seek(NativeIterator.java:151)
            at org.fusesource.leveldbjni.internal.NativeIterator.seek(NativeIterator.java:145)
            at org.fusesource.leveldbjni.internal.NativeIterator.seek(NativeIterator.java:138)
            at org.fusesource.leveldbjni.internal.JniDBIterator.seek(JniDBIterator.java:63)
            ... 3 more
    2022-01-19T18:54:52,999 main INFO  store.Constants - ########[ Initialized with 1000 time units per millisecond ]########
    2022-01-19T18:54:53,009 main WARN  script.WarpFleetMacroRepository - No validator macro, default macro will reject all URLs.
    2022-01-19 18:54:54.427:INFO:iwsjoejs.Server:jetty-8.y.z-SNAPSHOT
    

    It says that I have too many open files. However I set leveldb.maxopenfiles (to nothing, 1k, 100k…); Warp 10 opens 524287 files and then dies (found in a pretty dirty way: while true; do for pid in [0-9]*; do ls /proc/$pid/fd/ | wc -l; done | grep ......; done). My soft ulimit is 1024 and my hard ulimit is 1048576 (twice the number of files after which Warp 10 crashes). What can I do to properly start Warp 10?
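
    For reference, the property was set in the Warp 10 configuration file along these lines (value illustrative):

      # one of the values tried for the LevelDB open file limit
      leveldb.maxopenfiles = 1000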

    Thanks in advance

    opened by bensmrs 7
  • add SLEEP function and capability

    When pushing data to external services (using HTTP or WEBCALL), we might need to respect a minimum period between requests. The current way to do so is an infinite loop bounded by TIMEBOX, which is quite CPU intensive.

    Adds a sleep.maxtime capability.
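
    Usage could look like this (a hypothetical sketch, assuming SLEEP takes a duration in milliseconds and is gated by the sleep.maxtime capability):

      // hypothetical: the token must carry the sleep.maxtime capability
      '<token carrying sleep.maxtime>' CAPADD
      500 SLEEP   // pause 500 ms between two outgoing requests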

    opened by pi-r-p 0
  • check if MAP ticks input are sorted instead of sorting them

    In the case of a concurrent execution of MAP, sorting the outputTicks can lead to a ConcurrentModificationException.

    So I just check whether the input is correctly sorted instead of always calling Collections.sort().

    opened by pi-r-p 2
  • Error in bootstrap with TOKENGEN

    As specified in the documentation: "Secret configured via token.secret. This parameter should not be specified when calling TOKENGEN from Worf." But the warp10-standalone.sh script in version 2.10.1 still uses this possibility. As a result, if we define a token.secret in the configuration file (which is our case), the bootstrap fails with a stacktrace (and the initialtokens file is not generated). This is probably not a real issue, but the code could easily be updated (as could the default warp10-tokengen.mc2 with its {{secret}} template variable)...

    cheers

    opened by omerlin 0