Distributed ID Generation Service

Overview

Leaf

There are no two identical leaves in the world.

— Leibniz

Chinese Documentation | English Documentation

Introduction

Leaf draws on common ID generation schemes in the industry, such as Redis, UUID, and snowflake. Since each of these approaches has its own problems, we decided to implement a distributed ID generation service that meets our requirements. Leaf currently serves many business lines inside Meituan-Dianping, including finance, catering, takeaway (food delivery), hotel and travel, and Maoyan Movie. On a 4-core, 8 GB virtual machine, accessed through the company's RPC framework, load tests show nearly 50,000 QPS with a TP999 latency of about 1 ms.

You can use it to build a distributed unique-ID issuing center in a service-oriented (SOA) architecture, acting as the ID provider for all applications.
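For a sense of how an application might consume such a service over HTTP, here is a minimal client sketch. It simply calls the segment endpoint shown in the Quick Start section below; the host, port, and biz tag are example values, and it assumes the endpoint returns the generated id as plain text in the response body.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch of an HTTP client for a Leaf server (illustrative, not part of Leaf itself).
public class LeafHttpClient {

    // Fetches one id for the given biz tag from a locally running Leaf server.
    public static long nextId(String bizTag) throws Exception {
        URL url = new URL("http://localhost:8080/api/segment/get/" + bizTag);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            // Assumes the response body is the generated id as a decimal string.
            return Long.parseLong(in.readLine().trim());
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(nextId("leaf-segment-test"));
    }
}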

Quick Start

Leaf Server

Leaf provides an HTTP service based on Spring Boot for getting ids.

Run Leaf Server

Build:

git clone git@github.com:Meituan-Dianping/Leaf.git
cd leaf
mvn clean install -DskipTests
cd leaf-server

Run with Maven:

mvn spring-boot:run

or with the shell script:

sh deploy/run.sh

Test:

#segment
curl http://localhost:8080/api/segment/get/leaf-segment-test
#snowflake
curl http://localhost:8080/api/snowflake/get/test
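Assuming the corresponding mode is enabled, each call should return the next generated id as plain text in the response body; for the segment endpoint, the biz tag (here leaf-segment-test) must also exist as a row in the leaf_alloc table described in the Segment mode section below.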

Configuration

Leaf provides two ways to generate ids, segment mode and snowflake mode. You can enable either one or both at the same time; both are disabled by default.

Leaf Server configuration is in the leaf-server/src/main/resources/leaf.properties

| configuration             | meaning                                         | default |
| ------------------------- | ----------------------------------------------- | ------- |
| leaf.name                 | leaf service name                               |         |
| leaf.segment.enable       | whether segment mode is enabled                 | false   |
| leaf.jdbc.url             | mysql url                                       |         |
| leaf.jdbc.username        | mysql username                                  |         |
| leaf.jdbc.password        | mysql password                                  |         |
| leaf.snowflake.enable     | whether snowflake mode is enabled               | false   |
| leaf.snowflake.zk.address | zk address under snowflake mode                 |         |
| leaf.snowflake.port       | service registration port under snowflake mode  |         |
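Putting the options together, a leaf.properties enabling both modes might look like the sketch below; the service name, database coordinates, and ZooKeeper address are placeholders you would replace with your own values.

leaf.name=your-leaf-service-name
leaf.segment.enable=true
leaf.jdbc.url=jdbc:mysql://127.0.0.1:3306/leaf?useUnicode=true&characterEncoding=utf8
leaf.jdbc.username=your-username
leaf.jdbc.password=your-password
leaf.snowflake.enable=true
leaf.snowflake.zk.address=127.0.0.1:2181
leaf.snowflake.port=8089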

Segment mode

To use segment mode, you need to create the DB table first, then configure leaf.jdbc.url, leaf.jdbc.username, and leaf.jdbc.password.

If you do not want to use it, just set leaf.segment.enable=false to disable it.

CREATE DATABASE leaf;
CREATE TABLE `leaf_alloc` (
  `biz_tag` varchar(128)  NOT NULL DEFAULT '', -- your biz unique name
  `max_id` bigint(20) NOT NULL DEFAULT '1',
  `step` int(11) NOT NULL,
  `description` varchar(256)  DEFAULT NULL,
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`biz_tag`)
) ENGINE=InnoDB;

insert into leaf_alloc(biz_tag, max_id, step, description) values('leaf-segment-test', 1, 2000, 'Test leaf Segment Mode Get Id')
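As a worked example of how segment mode hands out ranges (based on the row above and on the usual description of the scheme; exact boundary handling may differ slightly in the implementation): with max_id = 1 and step = 2000, the first segment a Leaf server loads covers ids 1..2000 and the row's max_id is advanced to 2001; the next load covers 2001..4000, and so on. Ids are then served from the in-memory segment, so the database is only touched once per step ids.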

Snowflake mode

The algorithm is based on Twitter's open-source snowflake algorithm.

If you do not want to use it, just configure leaf.snowflake.enable=false to disable it.

Configure the zookeeper address

leaf.snowflake.zk.address=${address}
leaf.snowflake.enable=true
leaf.snowflake.port=${port}

Configure leaf.snowflake.zk.address in leaf.properties, and set the Leaf service listen port via leaf.snowflake.port.
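For orientation, a snowflake-style id packs a timestamp delta, a worker id, and a per-millisecond sequence into one 64-bit long (41 + 10 + 12 bits, leaving the sign bit 0). The sketch below shows that composition; the epoch constant and names are illustrative rather than Leaf's exact values.

// Illustrative snowflake-style id composition: 41-bit timestamp delta, 10-bit worker id, 12-bit sequence.
public class SnowflakeLayoutSketch {
    private static final long TWEPOCH = 1288834974657L;               // custom epoch in ms (illustrative value)
    private static final long WORKER_ID_BITS = 10L;
    private static final long SEQUENCE_BITS = 12L;
    private static final long WORKER_ID_SHIFT = SEQUENCE_BITS;        // worker id sits above the sequence
    private static final long TIMESTAMP_SHIFT = SEQUENCE_BITS + WORKER_ID_BITS;

    static long compose(long timestampMs, long workerId, long sequence) {
        // OR the three fields together; the result stays positive as long as
        // the timestamp delta fits in 41 bits.
        return ((timestampMs - TWEPOCH) << TIMESTAMP_SHIFT)
                | (workerId << WORKER_ID_SHIFT)
                | sequence;
    }

    public static void main(String[] args) {
        System.out.println(compose(System.currentTimeMillis(), 1L, 0L));
    }
}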

Monitor page

segment mode: http://localhost:8080/cache

Leaf Core

To pursue higher performance, you can deploy the Leaf service behind an RPC server instead: simply depend on the leaf-core package and wrap its ID-generation API in the RPC framework of your choice.
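As a rough sketch of that idea: leaf-core exposes an id generator (the issues below reference SnowflakeIDGenImpl#get(String key) returning a Result), so an RPC layer only needs a thin facade around it. The interface, class, and method names below follow the leaf-core sources as far as the issue discussions confirm them, but treat the exact signatures as assumptions and check them against your version; the facade class itself is hypothetical.

import com.sankuai.inf.leaf.IDGen;             // leaf-core generator interface (name assumed from the sources)
import com.sankuai.inf.leaf.common.Result;     // result holder with an id and a status (assumed)
import com.sankuai.inf.leaf.common.Status;

// Hypothetical facade an RPC framework (Thrift, Dubbo, gRPC, ...) could expose.
// The IDGen instance would be a configured SegmentIDGenImpl or SnowflakeIDGenImpl.
public class LeafRpcFacade {

    private final IDGen idGen;

    public LeafRpcFacade(IDGen idGen) {
        this.idGen = idGen;
    }

    public long nextId(String bizTag) {
        Result result = idGen.get(bizTag);
        if (result.getStatus() != Status.SUCCESS) {
            throw new IllegalStateException("Leaf id generation failed for tag " + bizTag);
        }
        return result.getId();
    }
}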

Attention

Note that in snowflake mode, Leaf's current IP-acquisition logic simply takes the IP of the first network card; services whose IP may change should pay particular attention to this in order to avoid wasting workerIds.

Comments
  • How to solve the problem of severe segment waste?

    Consider this scenario: a Leaf service has been running for a while under heavy traffic, so the step is very large (say it stays at MAX_STEP, i.e. 100,000). When 10% of the current segment has been consumed, the system asynchronously prepares the next segment. When the current segment is 10.01% consumed, the Leaf service crashes and is then restarted. You will find that segment waste is severe. How can this problem be solved?

    opened by zacharyzhu096 14
  • Hello, couldn't this code throw an NPE?

    if (cache.containsKey(key)) {
                SegmentBuffer buffer = cache.get(key);
                if (!buffer.isInitOk()) {
                    synchronized (buffer) {
                        if (!buffer.isInitOk()) {
                            try {
                                updateSegmentFromDb(key, buffer.getCurrent());
                                logger.info("Init buffer. Update leafkey {} {} from db", key, buffer.getCurrent());
                                buffer.setInitOk(true);
                            } catch (Exception e) {
                                logger.warn("Init buffer {} exception", buffer.getCurrent(), e);
                            }
                        }
                    }
                }
                return getIdFromSegmentBuffer(cache.get(key));
            }
    

    The code calls containsKey and then get; if the entry were removed between those two calls, the later cache.get(key) could return null.

    opened by hymcn 6
  • Time delta too large: overflowing 41 bits makes the generated id negative

    Leaf follows snowflake's bit allocation: at most a 41-bit time delta + a 10-bit workerId + a 12-bit sequence. Snowflake requires the leading bit (the sign bit) to be 0; otherwise the generated id is negative when converted to decimal. However, Leaf does not validate the time delta: if the timestamp is too large, or the custom twepoch is mistakenly set too small, the computed delta can exceed 41 bits in binary with its leading bit set to 1, so the generated long id becomes negative. For example, when timestamp = twepoch + 2199023255552L, timestamp - twepoch equals 2199023255552, which in binary is a 1 followed by 41 zeros; since the sign bit of the resulting id is then 1, the id is the negative number -9223372036854775793.

     long id = ((timestamp - twepoch) << timestampLeftShift) | (workerId << workerIdShift) | sequence;
    

    Although this is quite a special case, Baidu's uid-generator also accounts for it and checks for it in its code, so I think Leaf should fix it too. I opened PR https://github.com/Meituan-Dianping/Leaf/pull/105 to fix this problem; I hope the Meituan folks can take a look when they have time.

    In addition, the diagram in Meituan's article introducing Leaf's snowflake algorithm contains an error: the sequence number should be allocated 12 bits, not 10, otherwise the total is not 63 bits.

    opened by NotFound9 4
  • When the step is set too small, the program throws an error

    Set the step to 50, then run the test method in SegmentIDGenImplTest looping 5000 times, and it fails with: Both two segments in SegmentBuffer{key='leaf-segment-test', segments=[Segment(value:53,max:51,step:50), Segment(value:103,max:101,step:50)], currentPos=1, nextReady=false, initOk=true, threadRunning=true, step=50, minStep=50, updateTimestamp=1556434867190} are not ready!

    From the error message, threadRunning=true means the current buffer is still being refilled, but the logic that fetches the next id does not wait for the refill to finish before trying to fetch, which causes the error.

    opened by fliu721 4
  • Running according to the documentation fails

    [WARNING] java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke (Method.java:498)
        at org.springframework.boot.maven.AbstractRunMojo$LaunchRunner.run (AbstractRunMojo.java:528)
        at java.lang.Thread.run (Thread.java:745)
    Caused by: org.springframework.boot.context.embedded.tomcat.ConnectorStartFailedException: Connector configured to listen on port 8080 failed to start
        at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.checkThatConnectorsHaveStarted (TomcatEmbeddedServletContainer.java:237)
        at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.start (TomcatEmbeddedServletContainer.java:213)
        at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.startEmbeddedServletContainer (EmbeddedWebApplicationContext.java:308)
        at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh (EmbeddedWebApplicationContext.java:147)
        at org.springframework.context.support.AbstractApplicationContext.refresh (AbstractApplicationContext.java:546)
        at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh (EmbeddedWebApplicationContext.java:124)
        at org.springframework.boot.SpringApplication.refresh (SpringApplication.java:693)
        at org.springframework.boot.SpringApplication.refreshContext (SpringApplication.java:360)
        at org.springframework.boot.SpringApplication.run (SpringApplication.java:303)
        at org.springframework.boot.SpringApplication.run (SpringApplication.java:1118)
        at org.springframework.boot.SpringApplication.run (SpringApplication.java:1107)
        at com.sankuai.inf.leaf.server.LeafServerApplication.main (LeafServerApplication.java:10)
        at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke (Method.java:498)
        at org.springframework.boot.maven.AbstractRunMojo$LaunchRunner.run (AbstractRunMojo.java:528)
        at java.lang.Thread.run (Thread.java:745)
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 51.736 s
    [INFO] Finished at: 2019-03-08T10:23:55+08:00
    [INFO] ------------------------------------------------------------------------
    [ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:1.5.18.RELEASE:run (default-cli) on project leaf-server: An exception occurred while running. null: InvocationTargetException: Connector configured to listen on port 8080 failed to start -> [Help 1]
    [ERROR]
    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR]
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

    opened by dhailing 4
  • Question about having to pass a string in snowflake mode

    Getting an id in snowflake mode requires passing in a completely useless string, which feels odd to me. Handling it this way seems pointless; you could simply write two interfaces, one per mode. Is it just for convenience in the open-source version? I'd like to ask the maintainers. I'm implementing an RPC calling style on my side, so I think a new module could be created with some APIs in it, with two separate interfaces, so that fetching in snowflake mode would no longer need a meaningless string.

    opened by Issocala 3
  • A '-' in leaf_name causes an error when obtaining the workerId

    https://github.com/Meituan-Dianping/Leaf/blob/76301dcb45bcbcc594f55ece6b1c0a01886edf84/leaf-core/src/main/java/com/sankuai/inf/leaf/snowflake/SnowflakeZookeeperHolder.java#L85-L92

    workerID = Integer.parseInt(nodeKey[1]); ends up retrieving the leaf_name value instead; it should be changed to workerID = Integer.parseInt(nodeKey[nodeKey.length - 1]);

    opened by wuweishuo 3
  • Question about the leaf_temporary ephemeral node in the snowflake startup flow

    The official blog describes the full startup flow in snowflake mode as follows:

    On startup, the service first checks whether it has ever written the ZooKeeper leaf_forever node:

    1. If it has, compare its own system time with the time recorded under the leaf_forever/${self} node; if it is earlier than the leaf_forever/${self} time, the machine clock is considered to have been rolled back by a large step, and the service fails to start and raises an alarm.
    2. If it has not, this is a new service node: create the persistent node leaf_forever/${self} and write its own system time, then check whether its system time is accurate by comparing against the other Leaf nodes. Specifically, take the service IP:Port of all ephemeral nodes under leaf_temporary (all running Leaf-snowflake nodes), obtain every node's system time via RPC, and compute sum(time)/nodeSize.
    3. If abs(system time - sum(time)/nodeSize) < threshold, the current system time is considered accurate; the service starts normally and writes the ephemeral node leaf_temporary/${self} to maintain its lease.
    4. Otherwise the local system time is considered to have drifted by a large step; startup fails and an alarm is raised.
    5. At a regular interval (3 s), report its own system time by writing it to leaf_forever/${self}.

    My question: the leaf_temporary ephemeral-node logic described in steps 2 and 3 does not seem to be in the source code. Was this part of the logic dropped in a later version?

    opened by MrSorrow 3
  • [Question] Design considerations for the snowflake id sequence not distinguishing between keys

    Hi Leaf Community,

    I'm new here, so please bear with me; I have a question I'd like to ask.

    In the current master (revision: 86a6441d263497b9f9ee321de13422b9c63f0c06), the SnowflakeIDGenImpl#get method is declared as follows:

    public synchronized Result get(String key) 
    

    From the source code, a few implementation details can be seen:

    1. The SnowflakeIDGenImpl#get method takes an object-level reentrant lock (it is synchronized).
    2. The key parameter is not used in the implementation of get.
    3. SnowflakeIDGenImpl is a singleton inside SnowflakeService, and SnowflakeService itself is also a singleton.
    4. From 1, 2 and 3 it follows that, for a given Leaf-Server instance, all requests to /api/snowflake/get/{key} share the same SnowflakeIDGenImpl, i.e. they belong to the same ID sequence, which carries two risks:
      1. Different keys share one ID sequence, so the sequence is consumed faster and may be exhausted within a single millisecond.
      2. Synchronizing the SnowflakeIDGenImpl#get method reduces concurrency and may become a bottleneck.

    Based on this understanding, I have a few questions:

    1. For the leaf server snowflake mode, is there any reference data on what single-machine QPS is a reasonably safe value?
    2. What were the design considerations for synchronizing on SnowflakeIDGenImpl#get? If it were changed to synchronize at the key level, what pitfalls would there be? A potential issue might be how the keys are maintained.
    opened by xiaozongyang 2
  • Make the IDs generated in segment mode non-contiguous, to protect information security

    I opened PR #118 to make the IDs generated in segment mode non-contiguous, in order to protect information security.

    Leaf's official introduction article mentions that distributed IDs should protect information security:

    4. Information security: if IDs are contiguous, malicious users can scrape data very easily by simply downloading the target URLs in order; if they are order numbers it is even more dangerous, because competitors can directly learn our daily order volume. So in some scenarios, IDs need to be irregular and non-sequential.

    IDs should be made non-contiguous; otherwise they are easy to scrape maliciously and they leak order volume. When generating IDs in snowflake mode there is already random-number logic that keeps IDs non-contiguous, but with the Leaf-segment database scheme, each batch of IDs fetched from the database is held in memory and handed out to the business projects, so the IDs used at runtime are contiguous. A competitor can easily place an order at 12:00 noon on two different days and subtract the order ids to roughly estimate the company's daily order volume. If, every time we fetch a segment of IDs from the database, we compute a random number and discard that many IDs, the IDs become non-contiguous and the Leaf-segment scheme's generated IDs remain safe.

    Original code: in SegmentIDGenImpl's updateSegmentFromDb() method, the minimum value of the current ID segment is computed as

    long value = leafAlloc.getMaxId() - buffer.getStep();
    

    Modified code:

    int bound = buffer.getStep() / 20; // cap the random value at 5% of the step: once more than 10% of a buffer's ids are used, the other buffer's update is triggered, so keeping the maximum random value below 10% avoids triggering that update right away
    int randomValue = RANDOM.nextInt(bound); // generate the random value
    long value = leafAlloc.getMaxId() - buffer.getStep() + randomValue; // discard randomValue ids so the ids are non-contiguous, keeping them safe
    
    wontfix 
    opened by NotFound9 2
  • When Leaf is deployed on multiple machines, the cache cannot be shared. Does Meituan have a good internal solution? Would storing the cache in Redis work?

    Symptom: to avoid a single leaf-server being a point of failure, multiple leaf-server machines are registered with a registry (for example Spring Cloud's Eureka) and then serve external traffic. The generated ids then jump around and are not contiguous. For example, with step = 2000 and ids starting at 1, if three machines are started (machine A gets 1-2000, machine B gets 2001-4000, machine C gets 4001-6000), the generated IDs come out as 1 2001 4001 2 2002 4002 3 2003 4003.

    Could the cache be stored in Redis? Would that affect efficiency? Or should we just do HA, so that if one machine goes down another one takes over immediately, and only one machine ever serves traffic?

    opened by yangchangjiang 2
  • Bump spring-beans from 4.3.18.RELEASE to 5.2.20.RELEASE

    Bumps spring-beans from 4.3.18.RELEASE to 5.2.20.RELEASE.

    Release notes

    Sourced from spring-beans's releases.

    v5.2.20.RELEASE

    :star: New Features

    • Restrict access to property paths on Class references #28262
    • Improve diagnostics in SpEL for large array creation #28257

    v5.2.19.RELEASE

    :star: New Features

    • Declare serialVersionUID on DefaultAopProxyFactory #27785
    • Use ByteArrayDecoder in DefaultClientResponse::createException #27667

    :lady_beetle: Bug Fixes

    • ProxyFactoryBean getObject called before setInterceptorNames, silently creating an invalid proxy [SPR-7582] #27817
    • Possible NPE in Spring MVC LogFormatUtils #27783
    • UndertowHeadersAdapter's remove() method violates Map contract #27593
    • Fix assertion failure messages in DefaultDataBuffer.checkIndex() #27577

    :notebook_with_decorative_cover: Documentation

    • Lazy annotation throws exception if non-required bean does not exist #27660
    • Incorrect Javadoc in [NamedParameter]JdbcOperations.queryForObject methods regarding exceptions #27581
    • DefaultResponseErrorHandler update javadoc comment #27571

    :hammer: Dependency Upgrades

    • Upgrade to Reactor Dysprosium-SR25 #27635
    • Upgrade to Log4j2 2.16.0 #27825

    v5.2.18.RELEASE

    :star: New Features

    • Enhance DefaultResponseErrorHandler to allow logging complete error response body #27558
    • DefaultMessageListenerContainer does not log an error/warning when consumer tasks have been rejected #27457

    :lady_beetle: Bug Fixes

    • Performance impact of con.getContentLengthLong() in AbstractFileResolvingResource.isReadable() downloading huge jars to check component length #27549
    • Performance impact of ResourceUrlEncodingFilter on HttpServletResponse#encodeURL #27548
    • Avoid duplicate JCacheOperationSource bean registration in #27547
    • Non-escaped closing curly brace in RegEx results in initialization error on Android #27502
    • Proxy generation with Java 17 fails with "Cannot invoke "Object.getClass()" because "cause" is null" #27498
    • ConcurrentReferenceHashMap's entrySet violates the Map contract #27455

    :hammer: Dependency Upgrades

    • Upgrade to Reactor Dysprosium-SR24 #27526

    v5.2.17.RELEASE

    ... (truncated)

    Commits
    • cfa701b Release v5.2.20.RELEASE
    • 996f701 Refine PropertyDescriptor filtering
    • 90cfde9 Improve diagnostics in SpEL for large array creation
    • 94f52bc Upgrade to Artifactory Resource 0.0.17
    • d4478ba Upgrade Java versions in CI image
    • 136e6db Upgrade Ubuntu version in CI images
    • 8f1f683 Upgrade Java versions in CI image
    • ce2367a Upgrade to Log4j2 2.17.1
    • acf7823 Next development version (v5.2.20.BUILD-SNAPSHOT)
    • 1a03ffe Upgrade to Log4j2 2.16.0
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Starting Leaf via install-and-import of the package reports java.lang.NoClassDefFoundError: org/apache/ibatis/cursor/Cursor

    My current Spring Boot version is 2.6.x. I import leaf-boot-starter com.sankuai.inf.leaf 1.0.1-SNAPSHOT via mvn clean install -Dmaven.test.skip=true (it is 1.0.1-SNAPSHOT because I changed RELEASE to SNAPSHOT inside). On startup the error above is reported. Leaf's Spring version is 5.2.9, spring-boot is 2.3.4.RELEASE, mysql-connector-java is 8.0.13, and druid is 1.1.10. Changing the mybatis-spring and mybatis versions did not help.

    opened by ccc-ju 0
  • Bump log4j-core from 2.7 to 2.17.1

    Bumps log4j-core from 2.7 to 2.17.1.


    dependencies 
    opened by dependabot[bot] 0
  • why must set value before set max

    //must set value before set max
    long value = leafAlloc.getMaxId() - buffer.getStep();
    segment.getValue().set(value);
    segment.setMax(leafAlloc.getMaxId());
    segment.setStep(buffer.getStep());

    Why must value be set before max?

    opened by leechliao 0
Owner
Meituan
Official account of the Meituan technical team.