A FlinkSQL studio and real-time computing platform based on Apache Flink

Overview

Dinky

Introduction

Real-time is the future. Dinky was born for Apache Flink to make Flink SQL silky smooth, and it is committed to building a real-time computing platform.

Dinky is built on Apache Flink to enhance Flink's usability and experience and to explore the streaming data warehouse. Standing on the shoulders of giants to innovate and practice, Dinky has great potential in the coming trend of unified batch and stream processing.

Finally, Dinky's development owes much to the guidance and achievements of Apache Flink and other excellent open-source projects.

Origin of the Name

Dinky (formerly Dlink):

1. Dinky means "small and exquisite", which most directly describes its character: lightweight, yet equipped with sophisticated big data development capabilities.

2. It is also the acronym of "Data Integrate No Knotty", meaning "data integration made easy", implying that it is easy to build a unified batch-stream platform and its applications.

3. Renaming Dlink to Dinky is a smooth transition that states the project's goal more vividly and keeps reminding contributors to stay true to the original aspiration.

Principles

Features

Note: all features below have been implemented in the listed versions and verified in practice.

Development Center / FlinkSQL:
  Supports all sql-client syntax (0.4.0)
  Supports all Flink Configuration options (0.4.0)
  Supports all Flink Connectors (0.4.0)
  Real-time preview of SELECT, SHOW, DESC, and similar queries (0.4.0)
  Supports INSERT statement sets (0.4.0)
  SQL fragment syntax (0.4.0)
  AGGTABLE table-valued aggregation syntax and UDATF support (0.4.0)
  CDCSOURCE multi-source merge syntax (0.6.0)
  FlinkSQLEnv execution environment reuse (0.5.0)
  Interactive Flink Catalog queries (0.4.0)
  Shared and private session mechanisms for execution environments (0.4.0)
  Job directory management for multiple dialects (FlinkSQL, SQL, Java) (0.5.0)
  Job configuration and execution configuration management (0.4.0)
  Explain-based syntax validation and logical analysis (0.4.0)
  JobPlan graph preview (0.5.0)
  Table-level lineage analysis based on the StreamGraph (0.4.0)
  Auto-prompt and completion driven by contextual metadata (0.4.0)
  Auto-prompt and completion with custom rules (0.4.0)
  Keyword highlighting and code minimap (0.4.0)
  Execution of a selected fragment (0.4.0)
  Drag-and-drop layout (0.4.0)
  SQL export (0.5.0)
  Keyboard shortcuts for save, validate, and beautify (0.5.0)
  FlinkSQL submission in local mode (0.4.0)
  FlinkSQL submission in standalone mode (0.4.0)
  FlinkSQL submission in yarn session mode (0.4.0)
  FlinkSQL submission in yarn per-job mode (0.4.0)
  FlinkSQL submission in yarn application mode (0.4.0)
  FlinkSQL submission in kubernetes session mode (0.5.0)
  FlinkSQL submission in kubernetes application mode (0.5.0)
  Online authoring, debugging, and dynamic loading of Java-dialect UDFs in local mode (0.5.0)

Development Center / Flink jobs:
  Jar submission in yarn application mode (0.4.0)
  Jar submission in k8s application mode (0.5.0)
  Job Cancel (0.4.0)
  Job SavePoint Cancel, Stop, and Trigger (0.4.0)
  Automatic job recovery from a SavePoint (latest, earliest, or a specified one) (0.4.0)

Development Center / Flink clusters:
  Viewing the job list and operations of registered clusters (0.4.0)
  Automatic registration of clusters created on Yarn (0.4.0)

Development Center / SQL:
  SQL validation for external data sources (0.5.0)
  SQL execution and preview for external data sources (0.5.0)

Development Center / BI:
  Line chart rendering (0.5.0)
  Bar chart rendering (0.5.0)
  Pie chart rendering (0.5.0)

Development Center / Metadata:
  Querying metadata of external data sources (0.4.0)
  Automatic generation of FlinkSQL and SQL (0.6.0)

Development Center / Archive:
  Execution and submission history (0.4.0)

Operations Center:
  None yet (0.4.0)

Registration Center / Flink cluster instances:
  Registration of external Flink cluster instances (0.4.0)
  Heartbeat detection and version retrieval for external Flink cluster instances (0.4.0)
  One-click manual reclamation of external Flink clusters (0.4.0)

Registration Center / Flink cluster configurations:
  Registration and testing of Flink on Yarn cluster configurations (0.4.0)

Registration Center / User Jar:
  Registration of external User Jars (0.4.0)

Registration Center / Data sources:
  MySQL data source registration and testing (0.4.0)
  Oracle data source registration and testing (0.4.0)
  PostgreSQL data source registration and testing (0.4.0)
  ClickHouse data source registration and testing (0.4.0)

OpenApi / Scheduling:
  submitTask scheduling API (0.5.0)

OpenApi / FlinkSQL:
  executeSql submission API (0.5.0)
  explainSql validation API (0.5.0)
  getJobPlan plan API (0.5.0)
  getStreamGraph plan API (0.5.0)
  getJobData data API (0.5.0)

OpenApi / Flink:
  executeJar submission API (0.5.0)
  cancel stop API (0.5.0)
  savepoint trigger API (0.5.0)

About:
  Dlink release notes (0.4.0)
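The OpenApi rows above are plain HTTP endpoints exposed by the dlink-admin service. A minimal sketch with curl, assuming the default port 8888, an /openapi path prefix, and a JSON body carrying the statement (the exact paths and payload fields are assumptions; verify them against the OpenAPI documentation of your version):

# Validate a statement through the assumed explainSql endpoint
curl -X POST http://127.0.0.1:8888/openapi/explainSql \
  -H 'Content-Type: application/json' \
  -d '{"statement": "SELECT 1"}'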

Deployment

Versions

Preview (main branch): dlink-0.6.0-SNAPSHOT

Stable (0.5.1 branch): dlink-0.5.1

Getting Started from the Release Package

config/ -- configuration files
|- application.yml
extends/ -- extensions
|- dlink-client-1.11.jar
|- dlink-client-1.12.jar
|- dlink-client-1.14.jar
html/ -- compiled frontend assets, served by Nginx
jar/ -- jars used when dlink submits SQL in application mode
lib/ -- internal components
|- dlink-client-1.13.jar -- required
|- dlink-connector-jdbc.jar
|- dlink-function.jar
|- dlink-metadata-clickhouse.jar
|- dlink-metadata-mysql.jar
|- dlink-metadata-oracle.jar
|- dlink-metadata-postgresql.jar
plugins/
|- flink-connector-jdbc_2.11-1.13.3.jar
|- flink-csv-1.13.3.jar
|- flink-dist_2.11-1.13.3.jar
|- flink-json-1.13.3.jar
|- flink-shaded-hadoop-3-uber-3.1.1.7.2.1.0-327-9.0.jar
|- flink-shaded-zookeeper-3.4.14.jar
|- flink-table-blink_2.11-1.13.3.jar
|- flink-table_2.11-1.13.3.jar
|- mysql-connector-java-8.0.21.jar
sql/
|- dlink.sql -- MySQL initialization script (run this on first deployment)
|- dlink_history.sql -- MySQL upgrade scripts for each version and point in time
auto.sh -- start/stop script
dlink-admin.jar -- application package

The unpacked structure is shown above; edit the configuration file as needed. The lib folder holds dlink's own extension files, and the plugins folder holds the official Flink and Hadoop extension files (if plugins contains flink-shaded-hadoop-3-uber or another jar that may conflict, manually delete the conflicting contents inside it, such as javax.servlet). All jars in plugins must be downloaded by yourself, matching your version numbers, to get the full feature set; you may also use a Flink distribution built from your own modified source. The extends folder serves only as a backup for extension plugins and is not loaded by dlink.
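To strip the conflicting javax.servlet classes mentioned above, one option is the zip utility, which can delete entries from a jar in place; a minimal sketch (match the jar name to the version you actually downloaded):

# Remove the bundled servlet classes that clash with dlink's own servlet container
zip -d plugins/flink-shaded-hadoop-3-uber-3.1.1.7.2.1.0-327-9.0.jar 'javax/servlet/*'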

Please double-check that the flink-dist, flink-table, flink-shaded-hadoop-3-uber, and the other dependencies listed above, matching your Flink version, have been added under plugins!!!

Create the dlink database in MySQL and run the initialization script dlink.sql.
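A minimal sketch of that initialization, assuming a local MySQL instance and a root account (adjust the host, credentials, and character set to your environment):

# Create the dlink database; utf8mb4 is an assumption, pick the charset your site uses
mysql -uroot -p -e "CREATE DATABASE dlink DEFAULT CHARACTER SET utf8mb4;"
# Load the first-deployment schema
mysql -uroot -p dlink < sql/dlink.sql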

Run the following commands to manage the application.

sh auto.sh start
sh auto.sh stop
sh auto.sh restart
sh auto.sh status

Quick frontend access: if the flink-shaded-hadoop-3-uber jar has been placed under plugins, first delete its internal javax.servlet classes manually; you can then open the frontend page normally on the default port 8888 (e.g. 127.0.0.1:8888).

Separate frontend/backend deployment with Nginx (recommended): consult a search engine for how to install Nginx. Upload the html folder into Nginx's html directory (or point the static-resource path in the Nginx configuration at its absolute location), modify the Nginx configuration, and restart.

    server {
        listen       9999;
        server_name  localhost;

		gzip on;
		gzip_min_length 1k;
		gzip_comp_level 9;
		gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
		gzip_vary on;
		gzip_disable "MSIE [1-6]\.";

        location / {
            root   html;
            index  index.html index.htm;
			try_files $uri $uri/ /index.html;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ^~ /api {
            proxy_pass http://127.0.0.1:8888;
            proxy_set_header   X-Forwarded-Proto $scheme;
            proxy_set_header   X-Real-IP         $remote_addr;
        }
    }
  1. server.listen is the port the frontend is served on.
  2. proxy_pass is the backend address, e.g. http://127.0.0.1:8888.
  3. Upload the built frontend assets from the html folder into Nginx's html folder; if Nginx is already running, execute nginx -s reload to reload the configuration, then access the page. See the sketch below.
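A minimal sketch of that last step, assuming Nginx's web root is /usr/share/nginx/html (adjust the path to your installation):

# Copy the built frontend into Nginx's web root, then reload the configuration
cp -r html/* /usr/share/nginx/html/
nginx -s reload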

Building from Source

Project Layout

dlink -- parent project
|-dlink-admin -- administration center
|-dlink-app -- Application Jar
|-dlink-assembly -- packaging configuration
|-dlink-client -- Client center
| |-dlink-client-1.11 -- Client 1.11 implementation
| |-dlink-client-1.12 -- Client 1.12 implementation
| |-dlink-client-1.13 -- Client 1.13 implementation
| |-dlink-client-1.14 -- Client 1.14 implementation
|-dlink-common -- common center
|-dlink-connectors -- Connectors center
| |-dlink-connector-jdbc -- JDBC extension
|-dlink-core -- execution center
|-dlink-doc -- documentation
| |-bin -- startup scripts
| |-config -- configuration files
| |-doc -- user documentation
| |-extends -- Docker and K8s templates
| |-sql -- SQL scripts
|-dlink-executor -- executor center
|-dlink-extends -- extension center
|-dlink-function -- function center
|-dlink-gateway -- Flink gateway center
|-dlink-metadata -- metadata center
| |-dlink-metadata-base -- metadata base component
| |-dlink-metadata-clickhouse -- metadata ClickHouse implementation
| |-dlink-metadata-mysql -- metadata MySQL implementation
| |-dlink-metadata-oracle -- metadata Oracle implementation
| |-dlink-metadata-postgresql -- metadata PostgreSQL implementation
|-dlink-web -- React frontend

Build and Package

The following environment versions have been verified to build successfully:

Environment  Version
npm          7.19.0
node.js      14.17.0
jdk          1.8.0_201
maven        3.6.0
lombok       1.18.16
mysql        5.7+

Build and package with:

mvn clean install -Dmaven.test.skip=true
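A minimal end-to-end sketch, assuming the repository is hosted at github.com/DataLinkDC/dlink (verify the URL before cloning) and the tools listed above are on the PATH:

# Clone and build; tests are skipped to shorten the build
git clone https://github.com/DataLinkDC/dlink.git
cd dlink
mvn clean install -Dmaven.test.skip=true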

Extending Connectors and UDFs

Place Connector and UDF jars that already work on your Flink cluster directly into Dlink's plugins folder, then restart. Writing a custom Connector follows the same process as in the official Flink documentation.
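For example, a minimal sketch (the jar names are placeholders for your own artifacts):

# Drop an already-working connector jar and UDF jar into plugins, then restart dlink
cp flink-connector-example_2.11-1.13.3.jar plugins/
cp my-udf.jar plugins/
sh auto.sh restart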

Extending Metadata

Metadata extensions follow the Java SPI mechanism; refer to the dlink-metadata-mysql implementation.
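A minimal SPI sketch under stated assumptions: the descriptor file name below uses com.dlink.metadata.driver.Driver, which is inferred from the module layout and should be confirmed against dlink-metadata-base, and com.example.metadata.MyDriver is a hypothetical implementation class:

# Register the implementation through the standard Java SPI descriptor
# (the interface name is an assumption; confirm it in dlink-metadata-base)
mkdir -p src/main/resources/META-INF/services
echo "com.example.metadata.MyDriver" > src/main/resources/META-INF/services/com.dlink.metadata.driver.Driver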

Extending to Other Flink Versions

The Flink version is determined by the dlink-client-1.13.jar under lib; the current default targets the Flink 1.13.5 API, so submitting jobs to clusters of other versions may fail. Clients for 1.11, 1.12, 1.13, and 1.14 are provided; to switch versions, simply replace the corresponding dependency under lib and restart.

When switching versions, the Flink dependencies under plugins must be updated at the same time.
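A minimal sketch of switching from 1.13 to 1.14 (jar versions are examples; match them to your cluster):

# Swap the dlink client implementation under lib
mv lib/dlink-client-1.13.jar extends/
cp extends/dlink-client-1.14.jar lib/
# Also replace the flink-dist, flink-table, and related jars under plugins with 1.14 builds, then restart
sh auto.sh restart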

User Guide

1. Flink AggTable in Practice with Dlink

2. Dlink Concepts, Principles, and Source Code Extension

3. Dlink 0.3.0 Has Arrived

4. Dlink Real-Time Computing Platform: Deployment

5. Dlink 0.3.2 Release Notes

6. Reading and Writing Hive with Dlink

7. Three Flink Execution Modes with Dlink on Yarn in Practice

8. Dlink in Practice: Flink-mysql-cdc to Doris

Technology Stack

Apache Flink

Mybatis Plus

ant-design-pro

Monaco Editor

SpringBoot

Acknowledgements

Thanks to JetBrains for sponsoring free open-source licenses

JetBrains

Upcoming Plans

1. Task lifecycle management

2. Job monitoring and operations

3. Automatic recovery of streaming jobs

4. Job log viewing

5. DingTalk alerts and notifications

Community and Contribution

You are welcome to join the community to communicate and share, and to contribute your own efforts.

Thank you all very much for your support!

QQ community group: 543709668; include the note "Dinky" with your request, or it will not be approved.

WeChat community group (recommended): add the WeChat ID wenmo_ai to be invited into the group; include the note "Dinky + company name + position", or it will not be approved.

WeChat official account (recommended for the latest news): DataLink数据中台

Screenshots

Login page

Home page

FlinkSQL Studio

Auto-completion

ChangeLog preview

BI line chart

Table preview

Syntax validation and logic checking

JobPlan preview

FlinkSQL export

Lineage analysis

Savepoint management

Shared session

Metadata

Cluster instances

Cluster configurations

Comments
  • Failed to submit the yarn application task

    Search before asking

    • [X] I had searched in the issues and found no similar issues.

    What happened

    I have registered the cluster successfully [image], but when I submit a task asynchronously, an error is reported [image]: [dlink] 2022-08-16 17:59:52 CST WARN org.springframework.core.log.CompositeLog 127 warn - Failed to evaluate Jackson deserialization for type [[simple type, class com.dlink.model.Task]]: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.fasterxml.jackson.datatype.jsr310.deser.LocalDateTimeDeserializer': Lookup method resolution failed; nested exception is java.lang.IllegalStateException: Failed to introspect Class [com.fasterxml.jackson.datatype.jsr310.deser.JSR310DateTimeDeserializerBase] from ClassLoader [org.springframework.boot.loader.LaunchedURLClassLoader@2b71fc7e]

    What you expected to happen

    Have no clue

    How to reproduce

    http://www.dlink.top/docs/extend/practice_guide/yarnsubmit yarn-application

    flink-1.14.4 hadoop-3.3.1 Compile and generate dlink-client-hadoop-0.6.6.jar

    Anything else

    No response

    Version

    0.6.6

    Are you willing to submit PR?

    • [ ] Yes I am willing to submit a PR!

    Code of Conduct

    bug 
    opened by ynzzxc 16
  • [Bug] [Savepoint] Restarting from a Savepoint fails

    Search before asking

    • [X] I had searched in the issues and found no similar issues.

    What happened

    [image]

    What you expected to happen

    How can this problem be solved?

    How to reproduce

    flink

    Anything else

    No response

    Version

    0.6.3-SNAPSHOT

    Are you willing to submit PR?

    • [X] Yes I am willing to submit a PR!

    Code of Conduct

    bug 
    opened by roy-long 14
  • Build error!

    Hi, I am building on Windows 10 with the same environment, and when the build reaches dlink-web it fails with the following error: [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.6.0:exec (exec-npm-install) on project dlink-web: Command execution failed.: Process exited with an error: 1 (Exit value: 1) -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.6.0:exec (exec-npm-install) on project dlink-web: Command execution failed.

    Please reply when you have time, thanks.

    opened by A-little-bit-of-data 14
  • reformat jobmanager.java file, improve code readability and some other trick refaction.

    Reformat the jobmanager.java file, improve code readability, and do some other refactoring tricks. The execute-SQL function in jobmanager.java is too long; split it.

    opened by leechor 13
  • [Bug] [Flink SQL YARN] Flink sql yarn application configure cluster error

    Search before asking

    • [X] I had searched in the issues and found no similar issues.

    What happened

    When I configure the yarn application and push the 'test' button, I get an error. I have put the dlink-app-1.14-0.6.5-jar-with-dependencies.jar file into the HDFS dir. I have configured hadoop_home in auto.sh.

    [image]

    What you expected to happen

    Caused by: java.util.ServiceConfigurationError: com.dlink.gateway.Gateway: Provider com.dlink.gateway.yarn.YarnApplicationGateway could not be instantiated at java.util.ServiceLoader.fail(ServiceLoader.java:232) ~[?:1.8.0_181] at java.util.ServiceLoader.access$100(ServiceLoader.java:185) ~[?:1.8.0_181] at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384) ~[?:1.8.0_181] at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404) ~[?:1.8.0_181] at java.util.ServiceLoader$1.next(ServiceLoader.java:480) ~[?:1.8.0_181] at com.dlink.gateway.Gateway.get(Gateway.java:29) ~[dlink-gateway-0.6.5.jar!/:?] at com.dlink.gateway.Gateway.build(Gateway.java:39) ~[dlink-gateway-0.6.5.jar!/:?] at com.dlink.job.JobManager.testGateway(JobManager.java:537) ~[dlink-core-0.6.5.jar!/:?] at com.dlink.service.impl.ClusterConfigurationServiceImpl.testGateway(ClusterConfigurationServiceImpl.java:80) ~[classes!/:?] at com.dlink.service.impl.ClusterConfigurationServiceImpl$$FastClassBySpringCGLIB$$4d2e2281.invoke() ~[classes!/:?] at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) ~[spring-core-5.3.15.jar!/:5.3.15] at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) ~[spring-aop-5.3.15.jar!/:5.3.15] at com.dlink.service.impl.ClusterConfigurationServiceImpl$$EnhancerBySpringCGLIB$$fdd1b0a1.testGateway() ~[classes!/:?] at com.dlink.controller.ClusterConfigurationController.testConnect(ClusterConfigurationController.java:104) ~[classes!/:?] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_181] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_181] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181] at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-5.3.15.jar!/:5.3.15] at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) ~[spring-web-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) [spring-webmvc-5.3.15.jar!/:5.3.15] ... 
42 more Caused by: java.lang.NoClassDefFoundError: org/apache/flink/client/deployment/ClusterRetrieveException at java.lang.Class.getDeclaredConstructors0(Native Method) ~[?:1.8.0_181] at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671) ~[?:1.8.0_181] at java.lang.Class.getConstructor0(Class.java:3075) ~[?:1.8.0_181] at java.lang.Class.newInstance(Class.java:412) ~[?:1.8.0_181] at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380) ~[?:1.8.0_181] at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404) ~[?:1.8.0_181] at java.util.ServiceLoader$1.next(ServiceLoader.java:480) ~[?:1.8.0_181] at com.dlink.gateway.Gateway.get(Gateway.java:29) ~[dlink-gateway-0.6.5.jar!/:?] at com.dlink.gateway.Gateway.build(Gateway.java:39) ~[dlink-gateway-0.6.5.jar!/:?] at com.dlink.job.JobManager.testGateway(JobManager.java:537) ~[dlink-core-0.6.5.jar!/:?] at com.dlink.service.impl.ClusterConfigurationServiceImpl.testGateway(ClusterConfigurationServiceImpl.java:80) ~[classes!/:?] at com.dlink.service.impl.ClusterConfigurationServiceImpl$$FastClassBySpringCGLIB$$4d2e2281.invoke() ~[classes!/:?] at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) ~[spring-core-5.3.15.jar!/:5.3.15] at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) ~[spring-aop-5.3.15.jar!/:5.3.15] at com.dlink.service.impl.ClusterConfigurationServiceImpl$$EnhancerBySpringCGLIB$$fdd1b0a1.testGateway() ~[classes!/:?] at com.dlink.controller.ClusterConfigurationController.testConnect(ClusterConfigurationController.java:104) ~[classes!/:?] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_181] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_181] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181] at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-5.3.15.jar!/:5.3.15] at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) ~[spring-web-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) [spring-webmvc-5.3.15.jar!/:5.3.15] ... 42 more Caused by: java.lang.ClassNotFoundException: org.apache.flink.client.deployment.ClusterRetrieveException at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:1.8.0_181] at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_181] at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:151) ~[dlink-admin-0.6.5.jar:?] 
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_181] at java.lang.Class.getDeclaredConstructors0(Native Method) ~[?:1.8.0_181] at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671) ~[?:1.8.0_181] at java.lang.Class.getConstructor0(Class.java:3075) ~[?:1.8.0_181] at java.lang.Class.newInstance(Class.java:412) ~[?:1.8.0_181] at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380) ~[?:1.8.0_181] at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404) ~[?:1.8.0_181] at java.util.ServiceLoader$1.next(ServiceLoader.java:480) ~[?:1.8.0_181] at com.dlink.gateway.Gateway.get(Gateway.java:29) ~[dlink-gateway-0.6.5.jar!/:?] at com.dlink.gateway.Gateway.build(Gateway.java:39) ~[dlink-gateway-0.6.5.jar!/:?] at com.dlink.job.JobManager.testGateway(JobManager.java:537) ~[dlink-core-0.6.5.jar!/:?] at com.dlink.service.impl.ClusterConfigurationServiceImpl.testGateway(ClusterConfigurationServiceImpl.java:80) ~[classes!/:?] at com.dlink.service.impl.ClusterConfigurationServiceImpl$$FastClassBySpringCGLIB$$4d2e2281.invoke() ~[classes!/:?] at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) ~[spring-core-5.3.15.jar!/:5.3.15] at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) ~[spring-aop-5.3.15.jar!/:5.3.15] at com.dlink.service.impl.ClusterConfigurationServiceImpl$$EnhancerBySpringCGLIB$$fdd1b0a1.testGateway() ~[classes!/:?] at com.dlink.controller.ClusterConfigurationController.testConnect(ClusterConfigurationController.java:104) ~[classes!/:?] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_181] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_181] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181] at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-5.3.15.jar!/:5.3.15] at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) ~[spring-web-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.3.15.jar!/:5.3.15] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) ~[spring-webmvc-5.3.15.jar!/:5.3.15] ... 42 more

    How to reproduce

    Just do like what I said.

    Anything else

    No response

    Version

    0.6.5

    Are you willing to submit PR?

    • [X] Yes I am willing to submit a PR!

    Code of Conduct

    bug 
    opened by lpn666 13
  • [Bug] [Module Name] com.dlink.exception.MetaDataException: unsupported data source type [ClickHouse], please add the extension dependency under lib

    Search before asking

    • [X] I had searched in the issues and found no similar issues.

    What happened

    com.dlink.exception.MetaDataException: unsupported data source type [ClickHouse], please add the extension dependency under lib

    What you expected to happen

    clickhouse-jdbc-0.2.6.jar has already been added to the plugins and lib directories; what other package still needs to be added?

    How to reproduce

    Add a ClickHouse data source.

    Anything else

    No response

    Version

    0.6.1-SNAPSHOT

    Are you willing to submit PR?

    • [ ] Yes I am willing to submit a PR!

    Code of Conduct

    bug 
    opened by wang316902972 13
  • org.apache.flink.util.FlinkException: The file LOG is not available on the TaskExecutor

    Search before asking

    • [X] I had searched in the issues and found no similar issues.

    What happened

    java.util.concurrent.CompletionException: org.apache.flink.util.FlinkException: The file LOG is not available on the TaskExecutor. at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) at java.util.concurrent.CompletableFuture.uniAccept(CompletableFuture.java:661) at java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:646) at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:251) at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1387) at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93) at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92) at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:45) at akka.dispatch.OnComplete.internal(Future.scala:299) at akka.dispatch.OnComplete.internal(Future.scala:297) at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65) at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284) at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284) at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284) at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621) at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:25) at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23) at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532) at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29) at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63) at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12) at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81) at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100) at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49) at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48) at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1067) at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1703) at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:172) Caused by: org.apache.flink.util.FlinkException: The file LOG is not available on the TaskExecutor.

    What you expected to happen

    Have no clue

    How to reproduce

    dlink-0.6.6, yarn-application, flink-1.14.4, with dlink-client-hadoop.jar replacing the uber jar

    There are no logs in the web UI, HDFS has the jobmanager log, and the job dies. If log4j.properties is placed into flink/conf, or the parameters $internal.deployment.config-dir and $internal.yarn.log-config-file are added, there are still no logs in the web UI, no jobmanager log details in HDFS either, only "task submitted successfully", and the job dies.

    Anything else

    No response

    Version

    0.6.6

    Are you willing to submit PR?

    • [ ] Yes I am willing to submit a PR!

    Code of Conduct

    bug 
    opened by ynzzxc 10
  • [Feature][admin] Permission management: add permission control to the data source maintenance pages

    Search before asking

    • [X] I had searched in the issues and found no similar feature requirement.

    Description

    Goal: prevent platform developers from obtaining data-source passwords through the platform.

    Use permission management to keep platform users from reading passwords via the data-source pages and APIs.

    1. Introduce permission management into user management; extend the user table with a permission-set field.
    2. Add a permission for creating, deleting, updating, and querying data sources; all users have it by default, and only the super administrator can manage this permission for all users.
    3. Only users holding this permission can enter the data-source pages.

    The platform masks the APIs that return flinksql, hiding the password.

    1. Syntax-check API: api/explainSql
    2. History: flink statements: api/history
    3. History: exception info: api/history

    In this way, developers reference data sources through variables when writing jobs, so database passwords are not visible to other users.

    Use case

    No response

    Related issues

    No response

    Are you willing to submit a PR?

    • [X] Yes I am willing to submit a PR!

    Code of Conduct

    new feature 
    opened by dzygcc 10
  • [Bug] [Operations Center] With dinky running normally and a job published, if the job stays RUNNING but the flink cluster is shut down, the Operations Center still reports the job as RUNNING after a restart

    Search before asking

    • [X] I had searched in the issues and found no similar issues.

    What happened

    With dinky running normally and a job published, if the job status stays RUNNING but the flink cluster has been shut down, the Operations Center still reports the job status as RUNNING after a restart.

    What you expected to happen

    The Operations Center status should track reality in real time, while also treating dinky's own data as authoritative: if dinky's recorded job state is RUNNING but the flink cluster has no corresponding job or has been shut down, dinky should be able to automatically restart the job.

    How to reproduce

    Submit a task directly from the dinky-admin web UI, shut down the flink cluster, then restart dinky; this reproduces the problem.

    Anything else

    No response

    Version

    0.6.5

    Are you willing to submit PR?

    • [ ] Yes I am willing to submit a PR!

    Code of Conduct

    bug 
    opened by qzgt 8
  • Dev

    Purpose of the pull request

    Brief change log

    Verify this pull request

    This pull request is code cleanup without any test coverage.

    (or)

    This pull request is already covered by existing tests, such as (please describe tests).

    (or)

    This change added tests and can be verified as follows:

    opened by gaopan-05 8
  • Error when submitting a CDC task to the cluster

    The same task runs successfully in local mode and also submits successfully via the flink sql client, but submission via dlink fails, with the following log: 2022-03-21 20:18:18,249 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Fatal error occurred in the cluster entrypoint. org.apache.flink.util.FlinkException: JobMaster for job bc81799a814721461992df5ae06bda8e failed. at org.apache.flink.runtime.dispatcher.Dispatcher.jobMasterFailed(Dispatcher.java:892) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.dispatcher.Dispatcher.jobManagerRunnerFailed(Dispatcher.java:459) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.dispatcher.Dispatcher.handleJobManagerRunnerResult(Dispatcher.java:436) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$runJob$3(Dispatcher.java:415) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) ~[?:1.8.0_181] at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797) ~[?:1.8.0_181] at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) ~[?:1.8.0_181] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [flink-dist_2.11-1.13.6.jar:1.13.6] at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [flink-dist_2.11-1.13.6.jar:1.13.6] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) [flink-dist_2.11-1.13.6.jar:1.13.6] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.11-1.13.6.jar:1.13.6] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.actor.Actor$class.aroundReceive(Actor.scala:517) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.actor.ActorCell.invoke(ActorCell.scala:561) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.dispatch.Mailbox.run(Mailbox.scala:225) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [flink-dist_2.11-1.13.6.jar:1.13.6] at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [flink-dist_2.11-1.13.6.jar:1.13.6] Caused by: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster.
at org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_181] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_181] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_181] at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595) ~[?:1.8.0_181] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_181] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181] at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181] Caused by: java.util.concurrent.CompletionException: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: TableSourceScan(table=[[default_catalog, default_database, vehicle_equipment_cdc]], fields=[id, tenant_id, is_active, create_date, modify_date, creator, modifier, terminal_code, video_terminal_code, device_type, device_brand, device_model, device_code, device_name, device_number, industry_code, protocol, is_exist_iot, iscanlock_status, application_id, application_name, application_phone, application_source, iscan_dangerous_operation, terminal_type, vehicle_model, inspection_date]) -> DropUpdateBefore -> NotNullEnforcer(fields=[id]) -> Sink: Sink(table=[default_catalog.default_database.vehicle_equipment_58mysql], fields=[id, tenant_id, is_active, create_date, modify_date, creator, modifier, terminal_code, video_terminal_code, device_type, device_brand, device_model, device_code, device_name, device_number, industry_code, protocol, is_exist_iot, iscanlock_status, application_id, application_name, application_phone, application_source, iscan_dangerous_operation, terminal_type, vehicle_model, inspection_date]) at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273) ~[?:1.8.0_181] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280) ~[?:1.8.0_181] at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592) ~[?:1.8.0_181] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_181] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181] at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181] Caused by: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for 
operator Source: TableSourceScan(table=[[default_catalog, default_database, vehicle_equipment_cdc]], fields=[id, tenant_id, is_active, create_date, modify_date, creator, modifier, terminal_code, video_terminal_code, device_type, device_brand, device_model, device_code, device_name, device_number, industry_code, protocol, is_exist_iot, iscanlock_status, application_id, application_name, application_phone, application_source, iscan_dangerous_operation, terminal_type, vehicle_model, inspection_date]) -> DropUpdateBefore -> NotNullEnforcer(fields=[id]) -> Sink: Sink(table=[default_catalog.default_database.vehicle_equipment_58mysql], fields=[id, tenant_id, is_active, create_date, modify_date, creator, modifier, terminal_code, video_terminal_code, device_type, device_brand, device_model, device_code, device_name, device_number, industry_code, protocol, is_exist_iot, iscanlock_status, application_id, application_name, application_phone, application_source, iscan_dangerous_operation, terminal_type, vehicle_model, inspection_date]) at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:316) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:114) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) ~[?:1.8.0_181] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_181] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181] at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181] Caused by: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: TableSourceScan(table=[[default_catalog, default_database, vehicle_equipment_cdc]], fields=[id, tenant_id, is_active, create_date, modify_date, creator, modifier, terminal_code, video_terminal_code, device_type, device_brand, device_model, device_code, device_name, device_number, industry_code, protocol, is_exist_iot, iscanlock_status, application_id, application_name, application_phone, application_source, iscan_dangerous_operation, terminal_type, vehicle_model, inspection_date]) -> DropUpdateBefore -> NotNullEnforcer(fields=[id]) -> Sink: Sink(table=[default_catalog.default_database.vehicle_equipment_58mysql], fields=[id, tenant_id, is_active, create_date, modify_date, creator, modifier, terminal_code, video_terminal_code, device_type, device_brand, device_model, device_code, device_name, device_number, industry_code, protocol, is_exist_iot, iscanlock_status, application_id, application_name, application_phone, application_source, iscan_dangerous_operation, terminal_type, vehicle_model, inspection_date]) at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.(ExecutionJobVertex.java:217) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:792) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at 
org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:196) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:107) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:342) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.scheduler.SchedulerBase.(SchedulerBase.java:190) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.scheduler.DefaultScheduler.(DefaultScheduler.java:122) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:132) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:110) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:340) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.jobmaster.JobMaster.(JobMaster.java:317) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:107) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) ~[?:1.8.0_181] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_181] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181] at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181] Caused by: java.lang.ClassCastException: cannot assign instance of com.fasterxml.jackson.databind.util.LRUMap to field com.fasterxml.jackson.databind.deser.DeserializerCache._cachedDeserializers of type java.util.concurrent.ConcurrentHashMap in instance of com.fasterxml.jackson.databind.deser.DeserializerCache at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2287) ~[?:1.8.0_181] at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1417) ~[?:1.8.0_181] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2293) ~[?:1.8.0_181] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) ~[?:1.8.0_181] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) ~[?:1.8.0_181] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) ~[?:1.8.0_181] at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) ~[?:1.8.0_181] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) ~[?:1.8.0_181] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) ~[?:1.8.0_181] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) ~[?:1.8.0_181] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) ~[?:1.8.0_181] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) ~[?:1.8.0_181] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) ~[?:1.8.0_181] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) ~[?:1.8.0_181] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) ~[?:1.8.0_181] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) ~[?:1.8.0_181] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) ~[?:1.8.0_181] at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1975) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567) ~[?:1.8.0_181] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) ~[?:1.8.0_181] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) ~[?:1.8.0_181] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) ~[?:1.8.0_181] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) ~[?:1.8.0_181] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) ~[?:1.8.0_181] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) ~[?:1.8.0_181] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) ~[?:1.8.0_181] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) ~[?:1.8.0_181] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) ~[?:1.8.0_181] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) ~[?:1.8.0_181] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) ~[?:1.8.0_181] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) ~[?:1.8.0_181] at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) ~[?:1.8.0_181] at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) ~[?:1.8.0_181] at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) ~[?:1.8.0_181] at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431) ~[?:1.8.0_181] at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:615) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:600) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:587) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:67) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.create(OperatorCoordinatorHolder.java:431) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.(ExecutionJobVertex.java:211) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:792) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:196) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:107) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:342) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.scheduler.SchedulerBase.(SchedulerBase.java:190) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.scheduler.DefaultScheduler.(DefaultScheduler.java:122) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:132) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:110) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:340) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.jobmaster.JobMaster.(JobMaster.java:317) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:107) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) ~[flink-dist_2.11-1.13.6.jar:1.13.6] at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) ~[?:1.8.0_181] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_181] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181] at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]

    opened by txl2017 8
  • [refactor-module][*,!web]refactor module name to dinky && rename package name to org.dinky

    Purpose of the pull request

    refactor module name to dinky && rename package name to org.dinky

    Brief change log

    Verify this pull request

    This pull request is code cleanup without any test coverage.

    (or)

    This pull request is already covered by existing tests, such as (please describe tests).

    (or)

    This change added tests and can be verified as follows:

    opened by zhu-mingye 0
  • Using yarn application mode to submit job reported null pointer exception

    Search before asking

    • [X] I had searched in the issues and found no similar issues.

    What happened

    When an empty configuration item appears among the other configuration items, submitting a task in yarn application mode throws a null pointer exception. [image]

    What you expected to happen

    [image]

    How to reproduce

    Add an empty configuration item to other configuration items, and submit the job according to the yarn application mode

    Anything else

    No response

    Version

    dev

    Are you willing to submit PR?

    • [ ] Yes I am willing to submit a PR!

    Code of Conduct

    bug 
    opened by gaoxianzhu 0
  • [Bug] [Website] The url of quick start not found

    Search before asking

    • [X] I had searched in the issues and found no similar issues.

    What happened

    [image]

    What you expected to happen

    empty

    How to reproduce

    empty

    Anything else

    empty

    Version

    dev

    Are you willing to submit PR?

    • [ ] Yes I am willing to submit a PR!

    Code of Conduct

    bug 
    opened by jieguangzhou 0
  • [Bug] How can tables without a primary key be synced?

    Search before asking

    • [X] I had searched in the issues and found no similar issues.

    What happened

    When testing single-table CDC and whole-database sync with CDCSOURCE, I ran into tables that have no primary key, and syncing them fails with an error.

    This looks a bit unreasonable, because tables without a primary key still need CDC-style sync. Some databases, especially BI databases, have many tables with no primary key but where a combination of fields identifies a unique record (a composite primary key is not created at table-creation time).

    How does everyone solve this? It is sometimes inconvenient to add primary keys to these tables.

    Will there be a solution for this later?

    What you expected to happen

    Tables without a primary key should also be syncable.

    How to reproduce

    Anything else

    No response

    Version

    0.6.7

    Are you willing to submit PR?

    • [ ] Yes I am willing to submit a PR!

    Code of Conduct

    bug 
    opened by cooltnt 3
  • [Feature][Flink Settings] remove flink jar settings to flink cluster manage

    Search before asking

    • [X] I had searched in the issues and found no similar feature requirement.

    Description

    For the application mode under multi-cluster, the jar file path needs to be adjusted multiple times. [image]

    Use case

    It is recommended to put it here, with each cluster mapped to its own jar configuration file. [image]

    Related issues

    No response

    Are you willing to submit a PR?

    • [ ] Yes I am willing to submit a PR!

    Code of Conduct

    new feature 
    opened by zackyoungh 0
  • [Bug] [Gateway] Reusing the yarnClient across concurrent submissions to different yarn clusters causes exceptions

    Search before asking

    • [X] I had searched in the issues and found no similar issues.

    What happened

    Current state: different tenants configure flink-on-yarn cluster configurations against different kerberos-authenticated yarn clusters; when multiple tenants submit jobs at the same time, exceptions occur.

    Tentative analysis: in submitJobGraph, yarnClient is first checked for null; if it is not null, init is skipped and the client is reused directly.

    The current design does not seem able to act as a client for several different yarn clusters at the same time. For scenarios with multiple kerberos-authenticated clusters, or several different cluster users on the same cluster, can different tenants submit jobs concurrently?

    What you expected to happen

    Jobs submitted to several different clusters at the same time should not conflict with one another.

    How to reproduce

    Submit jobs against different cluster configurations at the same time.

    Anything else

    No response

    Version

    dev

    Are you willing to submit PR?

    • [ ] Yes I am willing to submit a PR!

    Code of Conduct

    bug 
    opened by nylqd 0
Releases(v0.7.0)
  • v0.7.0(Nov 24, 2022)

    Feature:
    • Supports Apache Flink 1.16.0
    • Add build java udf jar
    • Support Flink session mode to automatically load udf jars
    • Support Flink per-job mode to automatically load udf jars
    • Support Flink application mode to automatically load udf jars
    • Support python udf online development
    • Support scala udf online development
    • Support custom k8sapp submit at studio
    • Add file upload in flinkjar task
    • Add Column Lineage By Logic Plan in Flink all version
    • Flink JDBC support data filter
    • Flink JDBC scan partition support datetime
    • Add multi tenant management
    • Add select tenant at login
    • Add dolphinScheduler auto create task
    • Add system log console
    • Add execution progress console
    • Add Flink UDF template
    • Add Presto data source
    • Add the function of deleting job tree directory level
    • Add new sink connector 'datastream-doris-ext' for CDCSOURCE to support metadata writing
    • DataStudio metadata add refresh button
    • Add database copying
    • Add frontend internationalization
    • Add backend internationalization
    • Add K8S auto deploy application
    • Add local environment with web UI
    • Add FlinkSQL built-in date global variable
    • Add Kerberos Verification on session model
    • Add CDCSOURCE supports Doris schema evolution
    • Add user can change password
    • Add kill cluster
    • Add deploy session cluster by cluster configuration
    • Add jmx monitoring
    • Add clear console

    Fix:
    • Resolve the exception when starting from the specified savepoint
    • Fix StarRocks databases display
    • Fix query exceptions caused by the system's failure to automatically clean up the selected table information when switching data sources
    • Fix Exception in obtaining SQLDDL of view
    • Fix invalid oracle validationQuery
    • Fix failed to get schema of PG database metadata
    • Fix kafka sink properties not enable
    • Fix jobConfig useAutoCancel parameter pass wrong value
    • Fix job monitoring bugs caused by multi tenancy
    • Fix the deletion of a cluster instance causes existing tasks to fail to stop
    • Fix application mode missing database variable
    • Fix continuous click task item will open multiple tabs problem
    • Fix cdcsource kafka product transactionalIdPrefix disable
    • Fix cluster display nothing when alias is not set
    • Fix task version query error
    • Fix the status of per-job and application always is unknown when the job completes
    • Fix guide page link error
    • Fix open API does not have tenant
    • Fix switch editor tab doesn't work

    Optimization & Improve:
    • Add parameter configuration of MySQL CDC
    • Optimize datastream kafka-json and datastream starrocks
    • Support metadata cache to memory or redis
    • Add uuid after doris label prefix
    • Optimize tenant selection
    • Change resource center to authentication center
    • Add spotless plugin
    • Optimize SQL files and differentiate versions by directory
    • Improve the automatic creation of MySQL tables
    • Improve postgres metadata information
    • Improve the generation of postgres table building statements
    • Optimize Flink Oracle Connector
    • Optimize maven assembly and profile
    • Compatible java 11
    • Remove duplicated init of datasource
    • Upgrade mysql-connector-java to 8.0.28
    • Upgrade Flink 1.14.5 to 1.14.6
    • Upgrade Guava and Lombok Version
    • Upgrade jackson and sa-token version
    • Optimize lineage to support watermark and udf and LOCALTIMESTAMP and cep and ROW_NUMBER
    • The authentication center is not visible under non-admin users

    Document:
    • Optimize readme.md
    • Update website
    • Optimize deploy doc
    • Add flink metrics monitor and optimize deploy document

    Contributors: @admxj @aiwenmo @billy-xing @boolean-dev @chengchuen @czy868 @dzygcc @Forus0322 @gujincheng @HamaWhiteGG @hxp0618 @ikiler @leechor @lewnn @jinyanhui2008 @nylqd @ren-jq101 @rookiegao @siriume @tgluon @wellCh4n @wfmh @wmtbnbo @wuzhenhua01 @zackyoungh @zhongjingq @zhu-mingye @ziqiang-wang @zq0757 @Zzih @zzzzzzzs

    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.7.0.tar.gz(203.54 MB)
  • v0.6.7(Sep 6, 2022)

    Feature:

    • [Feature-775][admin] Add tenant implementation
    • [Feature-789][admin,web] One-click online and offline operation
    • [Feature-812][admin,web] Add FragmentVariable manager && resource center page
    • [Feature] Add NameSpaceForm and PasswordForm
    • [Feature-823][web] Render multi-tenant forms on login
    • [Feature-823][web] Multi-tenant front-end implementation
    • [Feature-868][common] Add github workflow to check the checkstyle, test and build of each PR
    • [Feature-861][metadata] Add alibaba druid connection pooling to fix the jdbc multi-connection problem
    • [Feature-890][admin,web] Implement user role authorization
    • [Feature-907][pom] Change the Flink base version to 1.14
    • [Feature-905][admin,web] Implement global variable management
    • [Feature][client] Add SqlServer CDCSOURCE
    • [Feature-915][admin,core] Make global variables take effect in FlinkSQL (see the sketch after this list)
    • [Feature-923][*] Build column lineage based on the Flink logical plan
    • [Feature][client] Add postgresql CDCSOURCE
    • [Feature][test] Make checkstyle a required check
    • [Feature] Add swagger api doc
    • [Feature] CDCSOURCE supports multi-sink
    • [Feature][admin] File upload
    • [Feature-987][admin] Add file upload to ClusterConfig and jar
    • [Feature-989][metadata] Add StarRocks datasource
    • [Feature-946][alert] Alert after task monitoring retries
    • [Feature][web] Add log details button to data development task information
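
    To illustrate [Feature-915], a sketch assuming a global variable schema_name with value prod_db has been defined in the FragmentVariable manager; Dinky substitutes ${...} references before the statement is parsed. The variable and table names here are hypothetical:

        -- ${schema_name} expands to prod_db before parsing (hypothetical variable).
        SELECT id, amount
        FROM ${schema_name}.orders
        WHERE amount > 100;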

    Fix:

    • [Fix][admin] Fix repeated judgment in the task monitoring code
    • [Fix] Fix mail alert params bug
    • [Fix-804][admin] GetJobInfoDetail and refreshJobInfoDetail may return an error result
    • [Fix-818][connectors] Fix dlink-connector-doris-1.13 when data is flushed with an exception and no new data arrives afterwards
    • [Fix-833][client] Error with SQLSinkBuilder.buildRow
    • [Fix] Fix execution error with flink 1.14.4
    • [Fix] Fix empty taskId on cluster submission
    • [Fix] Fix yarn per-job not releasing resources
    • [Fix] Fix multi-tenant add role and delete role
    • [Fix] Fix dlink-connector-pulsar-1.14 not finding SubscriptionType
    • [Fix] Fix the jackson problem in flink 1.14 savepoint
    • [Fix-803][client] Fix TypeConvert ColumnType enumeration usage error
    • [Fix] Fix KafkaSinkBuilder in whole-database sync not implementing serialization, causing an error
    • [Fix-840][web] Fix registration document type filter condition error
    • [Fix] Fix yarn per-job/application and k8s application cluster configuration
    • [Fix] Fix k8s application submit error and add a maximum waiting time
    • [Fix] Fix the banner of the dlink admin app
    • [Fix][function] Fix udf and udtaf in Flink 1.14
    • [Fix][app] Fix yarn application task separator error
    • [Fix][admin] Fix failure to save a job after renaming it
    • [Fix][web] Fix empty content when opening the submit-history pop-up a second time

    Optimize:

    • [Optimization-764][dlink-web,docs] Optimize dlink-web and docs
    • [Optimization-780][admin] Delete task versions when the task is deleted
    • [Optimization-781][web] Fix overflow in the job list tree after importing a file
    • [Optimization-801][web] Optimize StudioConsole's StudioProcess
    • [Optimization-773][dlink-client] Optimize cdcsource filter processing
    • [Optimization-809][Common] Add the logs folder to the git ignore file
    • [Optimization-810][Common] Add Maven Wrapper
    • [Optimization-816][Common] Fix Chinese README link error and add English README
    • [Optimization] Optimize multi-tenancy && optimize form rendering
    • [Optimization] Remove sensitive info from more apis
    • [Optimization][admin] Optimize for multi-tenancy
    • [Optimization-819][client] CDCSOURCE with timestamp and timezone
    • [Optimization-849][client,executor] Replace the sql separator and change the default sql separator to ;\n
    • [Optimization][Style] Optimize code style import order
    • [Optimization][Git] Add .DS_Store to git ignore
    • [Optimization] Optimize multi-tenant role deletion && multi-tenant web rendering
    • [Optimization] Optimize rendering of user-associated roles
    • [Optimization][Style] Add dlink-admin module code style
    • [Optimization][Style] Add dlink-alert module code style
    • [Optimization][Style] Add dlink-common module code style
    • [Optimization][Style] Add dlink-catalog module code style
    • [Optimization][Style] Add dlink-client module code style
    • [Optimization][Style] Add dlink-app module code style
    • [Optimization][metadata] Optimize connection pooling and connection creation
    • [Optimization][Style] Add dlink-connectors module code style
    • [Optimization][Style] Add dlink-core module code style
    • [Optimization][Style] Add dlink-daemon module code style
    • [Optimization][Style] Add dlink-executor module code style
    • [Optimization][Style] Add dlink-function and dlink-gateway module code style
    • [Optimization][Style] Add dlink-metadata module code style
    • [Optimization][license] Add a license description to the pom file
    • [Optimization-932][pom] Optimize packaging and auto.sh by loading classpaths
    • [Optimization] Add ServicesResourceTransformer to dlink-client-hadoop
    • [Optimization-943][pom] Optimize config and static dir packaging
    • [Optimization][style] Configure global checkstyle validation
    • [Optimization][client] Add sqlserver date type conversion
    • [Optimization][metadata] Optimize the postgresql schema_name query
    • [Optimization-981][metadata] Doris supports more syntax
    • [Optimization-983][client] Optimize the Doris datastream sink
    • [Optimization][web] Fix some front-end problems and tips
    • [Optimization-882][web] Collapse all directories by default on datastudio's directory panel
    • [Optimization-881][datastudio] Optimize the FlinkSql explain exception message
    • [Optimization-1014][client] Optimize the Doris sink and type conversion and upgrade flink to 1.15.2
    • [Optimization][flink] Upgrade Flink 1.15 to 1.15.2
    • [Optimization][metadata] Optimize the SqlServer field type query

    Document:

    • [Document][docs] Migrate dlink docs from dinky-website to docs
    • [Document] Add Flink 1.15 docs
    • [Document-769][docs] Fix the whole-database synchronization document
    • [Document-766][docs] Add import and export job docs
    • [Document-793][doc] Optimize some docs
    • [Document-835][doc] Update the home page and basic information of the documentation
    • [Document-832][doc] Add practice sharing of extending Redis with FlinkSQL
    • [Document][docs] Optimize deployment documents

    Contributors: @aiwenmo @byd-android-2017 @chrofram @complone @dylenWu @dzygcc @gaogao110 @hxp0618 @hzymarine @ikiler @leo65535 @lnnlab @JanCong @jinyanhui2008 @mengyejiang @mydq @rafaelxie @sunyanqing01 @tgluon @Toms1999 @walkhan @wangzhonggui @XiaoF-Liu @zackyoungh @zhu-mingye @zhuangchong @zhujunjieit @ziqiang-wang

    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.6.7.tar.gz(110.86 MB)
  • v0.6.6(Jul 23, 2022)

    Feature:

    • [Feature-685][web,admin] Add job history version list in DevOps
    • [Feature-692][web] Add history version comparison in data development
    • [Feature][catalog] Add Flink MySql Catalog
    • [Feature-704][catalog,admin] Add default mysql catalog in FlinkSQLEnv
    • [Feature][connector] Add hidden-delete via _DORIS_DELETE to the 1.13 Doris connector
    • [Feature][connector] Add dlink-connector-pulsar
    • [Feature][web,admin] Restart from a selected checkpoint
    • [Feature][flink] Update flink 1.15 to flink 1.15.1
    • [Feature-728][admin,web] Add meta store feature
    • [Feature-733][admin,web,client] Add Flink meta store info and column details
    • [Feature-738][*] Add and update licenses
    • [Feature-750][admin,web] Add import and export of task json

    Fix:

    • [Fix][connector] Fix flink-connector-phoenix and update PhoenixDynamicTableFactory
    • [Fix][admin] Fix job instances causing OOM
    • [Fix][bug] Update the flink version and fix a Flink and CDC version compatibility bug
    • [Fix-696][web] Fix the parsing error of task instance duration
    • [Fix-709][catalog] Fix catalog SPI bug and sql bug
    • [Fix-719][web] Fix checkpoint error in DevOps && add savepoint info
    • [Fix-714][client] Catch translateToPlan exception in SQLSinkBuilder
    • [Fix-736][client] Fix an exception during the packaging process
    • [Fix-738][admin] Fix CopyrightUtil
    • [Fix-741][app] Fix application mode submission failure
    • [Fix-745][admin] Fix alert instance delete bug
    • [Fix][admin] Allow circular references config
    • [Fix-750][admin] Delete unreferenced classes
    • [Fix][admin] Fix null jobhistory field

    Optimize:

    • [Improvement][jobplan] Return a prompt when the optimized flinksql execution graph cannot be obtained
    • [Optimization][admin] Remove sensitive info (password) from the api
    • [Optimization-713][dlink-admin] Fix the incorrect message shown when a deleted user logs in
    • [Optimization-719][web] Optimize the checkpoints page

    Contributors: @a279780399 @aiwenmo @Arnu- @darren-da @dzygcc @Forus0322 @gaogao110 @JPengCheng @mydq @syyangs799 @wmtbnbo @zhu-mingye

    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.6.6.tar.gz(98.30 MB)
  • v0.6.5(Jul 3, 2022)

    Feature:

    • [Feature][connector] Add flink connector phoenix 1.14
    • [Feature-609][metadata] Support generating FlinkSQL with null and not null through metadata
    • [Feature-635][admin] Add job instance info api
    • [Feature-644][admin,web] Add Flink snapshot info and JobManager config in DevOps
    • [Feature-649][admin,web] Add TaskManager info in DevOps
    • [Feature-654][web] Add task info tab
    • [Feature-649][admin,web] Add TaskManager table and form in DevOps
    • [Feature-662][admin,web] Add task version history
    • [Feature-661][web] Add job checkpoint history
    • [Feature-666][client] Capture column type conversion exception details in CDCSOURCE
    • [Feature-668][web] Add task manager info
    • [Feature-663][admin,web] Add task copy feature

    Fix:

    • [Fix-577][web] Fix datastudio datasource alias being ""
    • [Fix-574][web] Fix datastudio tabs close_other not closing the first tab
    • [Fix-576][admin] Fix the exception caused by metadata switching
    • [Fix][gateway] Fix flinkLibPath in the K8S configuration being null and cluster-id in the test k8s-cluster-config being null
    • [Fix][common] Fix semicolon exception at the end of FlinkSql
    • [Fix][core] Fix K8S cluster configuration not picking up custom configuration
    • [Fix-596][admin] Fix NPE when running a yarn per-job app
    • [Fix-603][admin] Fix task information being cleared when refreshing tasks that have lost connection
    • [Fix-140][core] Fix the 'table.local-time-zone' parameter being invalid
    • [Fix-607][dlink-alert-email] Fix ClassNotFoundException: javax.mail.Address
    • [Fix-621][core] Fix failure to explain 'show databases'
    • [Fix][core] Fix setParentId method space determination error
    • [Fix-608][client] Fix failure to create a flink table when the table name is a sql reserved word in CDCSOURCE
    • [Fix-629][core] Fix lineage of identical field names not being correctly resolved
    • [Fix][web] Fix cluster configuration page bug
    • [Fix-638][alert] Fix the email alert notification not allowing a custom nickname
    • [Fix][connector] Fix dlink-connector-phoenix-1.14 build error
    • [Fix-670][metadata] Fix Oracle column nullable error
    • [Fix-674][client] Support MySQL varbinary and binary in CDCSOURCE

    Optimization & Improve:

    • [Optimization-575][web] Optimize tree search result highlighting && selected backgroundColor
    • [Optimization-605][web] Optimize empty alias handling in several places
    • [Optimization][client] Optimize explainSqlRecord
    • [Optimization-627][web] Optimize the cluster instance page
    • [Optimization-633][web] Optimize lineage refresh
    • [Optimization-641][admin] Optimize the job instance api
    • [Optimization-640][alert,web] Optimize sendMsg of all Alerts

    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.6.5.tar.gz(92.05 MB)
  • 0.6.4(Jun 5, 2022)

    Feature:

    • [Feature-506][client] The CDCSOURCE table parameter supports line feeds and sorting columns by primary key
    • [Feature-518][client] Add logging to CDCSOURCE
    • [Feature-525][alert,web] Add @mobile mode to the DingTalk alert
    • [Feature-254][web] Add StreamGraph JSON export
    • [Feature-534][admin,web] Add task open API page
    • [Feature-356][web] Add datastudio help page
    • [Feature-318][core] Add column lineage from db sql
    • [Feature-545][admin] Add Task Pool to reduce frequent database writes
    • [Feature-552][web] Hide the export StreamGraphPlan button for non-FlinkSQL tasks
    • [Feature-558][web] Add delete button to database management
    • [Feature-568][client] Add CDCSOURCE jdbc properties and upgrade the flink cdc version

    Fix:

    • [Fix-491][admin,web] Fix abnormal jitter when refreshing the status page
    • [Fix-499][connector] Flink oracle connector can't cast CLOB to String
    • [Fix-494][web] Fix the savepoint information not changing with the task information
    • [Fix-503][gateway,client] Compatibility with different versions of the ClusterClient interface
    • [Fix-509][metabase] Fix MySqlTypeConvert null precision and scale bug
    • [Fix-503][client] Remove initFunctions
    • [Fix-518][client] Fix CDCSOURCE decimal bug
    • [Fix-520][core] Fix failure to get JobPlanInfo
    • [Fix-522][client] Fix CDCSOURCE OracleCDC number not being castable to Long
    • [Fix-527][web] Fix error when sending a testMsg from the wechat alert instance
    • [Fix-529][web] Fix the task not being saved after switching tasks
    • [Fix-541][web] Fix the datasource && metadata not setting an alias
    • [Fix-542][web] Fix the tab name not updating when the task name is modified
    • [Fix-556][admin,gateway] Fix flinkLibPath in the K8S configuration being null
    • [Fix-571][client] Fix CDCSOURCE String not being castable to Timestamp

    Optimization & Improve:

    • [Optimization-504][doc] Optimize init sql
    • [Optimization-550][assembly] Optimize packaging
    • [Optimization-562][web] Remove preset-ui
    • [Optimization-564][metadata] Optimize MySqlTypeConvert

    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.6.4.tar.gz(87.55 MB)
  • 0.6.3(May 9, 2022)

    Feature (a CDCSOURCE sketch follows this list):

    • [Feature-422][*] CDCSource sync kafka topics
    • [Feature-429][*] OracleCDCSource sync kafka topics
    • [Feature-435][client,executor] CDCSource sync doris
    • [Feature-442][client,executor] CDCSource sync hudi
    • [Feature-445][client] CDCSource sync: add sink table-name rule
    • [Feature-447][client] CDCSource sync sql
    • [Feature-451][client] CDCSource sync field type conversion in Flink 1.13
    • [Feature-461][client] CDCSource sync: add sink table-name RegExp
    • [Feature-469][client] Add MysqlCDCSource sync extended configuration
    • [Feature-477][client] Add pkList to CDCSOURCE
    • [Feature-275][client] Add Flink client 1.15
    • [Feature-488][*] Release v0.6.3
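
    The CDCSOURCE items above all extend one statement: a single EXECUTE CDCSOURCE job captures a whole source database and writes every matched table to a sink. A minimal sketch following the documented pattern; hostnames, credentials and the Doris sink options are hypothetical, and exact option names vary by version:

        EXECUTE CDCSOURCE sync_demo WITH (
            'connector' = 'mysql-cdc',
            'hostname' = '127.0.0.1',
            'port' = '3306',
            'username' = 'root',
            'password' = '123456',
            'checkpoint' = '3000',
            'scan.startup.mode' = 'initial',
            'parallelism' = '1',
            -- regular expressions selecting the source tables ([Feature-461])
            'table-name' = 'test\.student,test\.score',
            -- sink.* options are passed through to the sink connector
            'sink.connector' = 'doris',
            'sink.fenodes' = '127.0.0.1:8030',
            'sink.username' = 'root',
            'sink.password' = '',
            'sink.table.identifier' = 'test.${tableName}'
        );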

    Fix:

    • [Fix-427][admin] Task status error when the task has stopped
    • [Fix-425][metadata] Fix Oracle unsupported character set
    • [Fix-426][metadata] Fix Clickhouse metadata not displaying
    • [Fix-387][web] Metadata switching error
    • [Fix-442][client,executor] Add Hudi SOURCE_AVRO_SCHEMA
    • [Fix-454][metadata] Assign DriverConfig's name
    • [Fix-424][common] Modify the default value of the sqlSeparator to ';\r\n|;\n'
    • [Fix][app] Batch execution in yarn-app mode using the useBatchModel parameter
    • [Fix-457][web] Fix bug when modifying the job name
    • [Fix-472][client] Mysqlcdc whole-database sync to Hudi error
    • [Fix-479][docs] Fix some inaccessible documentation links
    • [Fix-484][core] Fix job plan info being executed twice

    Optimization & Improve:

    • [Improve-434][admin] The release of Stream
    • [Optimization-439][client] Optimize CDCSource sync doris
    • [Improvement-456][dlink-doc] Fix start and stop conflicts when deploying multiple instances on the same machine
    • [Optimization-459][alert-wechat,web] Optimize the wechat Webhook sendMsg title
    • [Improvement-456][dlink-doc] Fix auto.sh LF and disable environment variables

    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.6.3.tar.gz(87.47 MB)
  • 0.6.2(Apr 17, 2022)

    Feature:

    • [Feature-312][alert] Add FeiShu alert type
    • [Feature-309][alert] Add Email alert type
    • [Feature-339][admin,web] Add alert message sending test
    • [Feature][root] Add docker support
    • [Feature-355][admin,executor] Restore a job from a savepoint path in remote mode
    • [Feature-377][*] Modify version to 0.6.2-SNAPSHOT
    • [Feature-389][client,executor] Add OracleCDCSourceMerge
    • [Feature-401][*] Modify version to 0.6.2

    Fix:

    • [Fix-239][core] Replace "==" comparison of reference values with the equals function
    • [Fix-310][gateway] Configure hadoop_conf for Flink
    • [Fix-347][alert] Make the latest configuration take effect during the alert test
    • [Fix-350][alert] Fix FeiShu alert failing to send when '@all' is chosen
    • [Fix-358][pom] Fix install
    • [Fix-363][metadata-hive] Fix hive getSqlGeneration exception
    • [Fix-370][executor] Avoid replacement failure due to special characters
    • [Fix-98][admin,web] Fix exception display
    • [Fix-127][core,executor] Add executeAsync datastream job
    • [Fix-352][admin] Fix modifying cluster configuration
    • [Fix-395][connector] Oracle connector: oracle.sql.TIMESTAMP to java.sql.Timestamp conversion causes a cast exception
    • [Fix-402][admin] MybatisPlus PO boolean defaults to false
    • [Fix-407][common,function] Replace "==" comparison of reference values with equals
    • [Fix-405][common] Fix the default value of SystemConfiguration sqlSeparator to ';\r\n'
    • [Fix-393][core] Fix program looping when an offline task fails
    • [Fix-418][admin,core,sql] Fix submit task parameter exception

    Optimization:

    • [Optimization-367][core] Optimize explainSql to return all errors
    • [Optimization-371][metadata-hive] Optimize metadata-hive pom
    • [Optimization-373][admin] Add httpclient to the admin jar
    • [Optimization-382][alert,metadata] Optimize SPI
    • [Optimization-384][client] Optimize CDCSourceMerge
    • [Optimization-397][daemon] SPI optimization
    • [Optimization-406][web] Optimize the FlinkWebUI button && add a FlinkWebUI button to the clusterInstance manager

    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.6.2.tar.gz(80.92 MB)
  • 0.6.1(Apr 1, 2022)

    1. [Feature] Add issue template
    2. [Fix-214][gateway,common] Failed to submit flink jar when the configuration has an empty string
    3. [Fix][metadata] MySqlDriver Flink Column Type Conversion error
    4. [Fix-224][metadata-hive] Fix HiveJDBC Multiple SQL Query
    5. [Fix-234][core] Fix flink jar not being monitored
    6. [Feature][connector] Add module dlink-connector-phoenix
    7. [Fix-222][pom] Update root pom mybatis-plus-boot-starter to the latest version
    8. [Feature-242][admin] Add open api named savepointTask
    9. [Feature-211][alert-wechat] Add WeChat WebHook AlertType
    10. [Fix-211][web] Fix alert instance form linkage
    11. [Fix-264][web] Fix the bug that the metadata details cannot be refreshed when you right-click to switch
    12. [Fix-233][user management] Fix the bug that the user password was incorrectly modified
    13. [Fix-220][docs] Change the encoding of the auto.sh shell script from CRLF to LF
    14. [Fix-192][web] Add a button to exit full screen
    15. [Fix][web] Fix the bug of beautifying "||" in the editor
    16. [Fix-108][gateway] Fix per-job cancel bug
    17. [Optimization-285][web] Optimize the document manager
    18. [Fix-263][dlink-admin] Fix the bug of abnormal lineage analysis
    19. [Fix-299][web] Fix DingTalk form not displaying correctly
    20. [Fix-294][admin] Fix flink conf json bug
    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.6.1.tar.gz(79.60 MB)
  • 0.6.0(Mar 21, 2022)

    New Features

    • Added keyword search box for the job directory tree
    • Added F2 full-screen development
    • Added K8S cluster configuration
    • Added Doris data source registration, metadata, query and execution
    • Added SqlServer data source registration, metadata, query and execution
    • Added Oracle data source registration, metadata, query and execution
    • Added Phoenix data source registration, metadata, query and execution
    • Added Hive data source registration, metadata, query and execution
    • Added generating FlinkSQL and SQL from metadata
    • Added CDCSOURCE multi-source merge task syntax support
    • Added job lifecycle management
    • Added management of the FlinkJar dialect
    • Added Batch engine
    • Added automatic injection of data source connection information via the fragment mechanism
    • Added user password change
    • Added alert module (instances and groups)
    • Added DingTalk alerts
    • Added WeChat Work alerts
    • Added task instance feature in the Operations Center
    • Added real-time task monitoring in the Operations Center
    • Added FlinkWebUI, smart stop, SavePoint and other operations to task monitoring in the Operations Center
    • Added job overview to task monitoring in the Operations Center
    • Added configuration information to task monitoring in the Operations Center
    • Added smart restart and alert push to task monitoring in the Operations Center
    • Added real-time automatic alerting
    • Added auto-incrementing correction of checkpoint and savepoint storage paths in Application mode
    • Added syntax validation and logic checks when publishing a job
    • Added automatic task submission and stop when a job goes online or offline
    • Added synchronization between the job lifecycle and task instances
    • Added switching between job instances and history in the Operations Center
    • Added exception information in the Operations Center
    • Added FlinkSQL view in the Operations Center
    • Added alert records in the Operations Center
    • Added column-level lineage analysis in the Operations Center
    • Added job cut and paste
    • Added fault tolerance for real-time task monitoring

    Fixes and Optimizations

    • Upgraded SpringBoot to 2.6.3
    • Upgraded Flink 1.13.5 to 1.13.6
    • Optimized SQL beautification
    • Optimized enabling data preview by default, etc.
    • Fixed frontend state assignment bug
    • Fixed exception preview content overflow bug
    • Fixed bug where data preview could not fetch data under special conditions
    • Optimized SQL editor performance
    • Fixed SQL not syncing after exiting full-screen development
    • Optimized the job configuration view and full-screen development buttons
    • Optimized the capture, feedback and persistence of exception logs
    • Optimized type conversion and connection management of metadata data sources
    • Optimized K8S Application submission configuration
    • Optimized JID submission detection for PerJob and Application jobs
    • Fixed bug where per-job tasks could not be submitted when cluster configuration parameters were empty
    • Optimized the result hints of syntax check suggestions
    • Fixed bug where Oracle metadata could not be fetched correctly
    • Fixed the alert instance dropdown not displaying when refreshing the alert group page
    • Treat a submission as failed when the JID cannot be obtained
    • Optimized dependency configuration for debugging in IDEA
    • Fixed backend errors and authentication issues when the user is not logged in
    • Fixed user logical (soft) deletion bug
    • Fixed display bugs related to kubernetes cluster configuration
    • Upgraded Studio lineage analysis to column level
    • Fixed Doris failing to get column primary key information
    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.6.0.tar.gz(76.90 MB)
  • 0.5.1(Jan 24, 2022)

    1. Fixed SHOW and DESC query preview not working
    2. Fixed remote syntax validation being applied to non-remote jobs
    3. Added dlink-client-hadoop dependency for customized versions
    4. Optimized menus
    5. Optimized the pom and upgraded log4j to the latest version
    6. Fixed multiple frontend bugs
    7. Added F2 full-screen development
    8. Upgraded SpringBoot to 2.6.3
    9. Optimized logging dependencies
    10. Fixed frontend state assignment bug
    11. Fixed exception preview content overflow bug
    12. Fixed bug where data preview could not fetch data under special conditions
    13. Optimized SQL editor performance
    14. Fixed SQL not syncing after exiting full-screen development
    15. Upgraded Flink 1.14.2 to 1.14.3
    16. Fixed missing class error when submitting Flink 1.14 tasks
    17. Optimized the job configuration view and full-screen development buttons
    18. Added K8S cluster configuration
    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.5.1.tar.gz(56.49 MB)
  • 0.5.0(Jan 16, 2022)

    1. Support submitting tasks in Kubernetes Session and Application modes
    2. Added online writing, debugging and dynamic loading of UDFs in the Java dialect under Local mode
    3. Added the FlinkSQL execution environment dialect and its application features (see the sketch after this list)
    4. Added line, bar and pie charts in the BI tab
    5. Added viewing table and column information in metadata
    6. Added ChangeLog and Table queries with automatic stop
    7. Added Mysql, Oracle, PostGreSql, ClickHouse, Doris and Java dialects
    8. Added OpenAPI interfaces for executing SQL, validating SQL, fetching the plan graph, fetching the StreamGraph, fetching preview data, executing a Jar, stop, and SavePoint
    9. Added SQL job syntax validation and statement execution for data sources
    10. Added shortcut keys for saving, validating, beautifying, etc.
    11. Added guide page
    12. Added JobPlanGraph display
    13. Added SQLServer Jdbc Connector implementation
    14. Added Local run mode selection and classification
    15. Added SavePoint restAPI implementation
    16. Added right-click close-others and close-all for editor tabs
    17. Added FlinkSQL and SQL export
    18. Added Studio management interaction for clusters and data sources
    19. Added Yarn Kerberos verification
    20. Established the official documentation site
    21. Renamed the project to Dinky and updated the logo
    22. Optimized the execution logic of all features in all modes
    23. Upgraded each version's Flink dependencies to the latest to resolve the log4j vulnerability
    24. Fixed bug where testing and then saving an edited cluster configuration created a new one
    25. Fixed error popup on the login page
    26. Fixed Yarn Application array parsing exception
    27. Fixed exception caused by an empty custom Jar configuration
    28. Fixed cluster registration error when task submission fails
    29. Fixed set statements not taking effect in per-job and application modes
    30. Fixed task names not being customizable in per-job and application modes
    31. Fixed set syntax compatibility between 1.11 and 1.12
    32. Fixed the lineage graph failing to load due to frontend dependencies
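
    Item 3's FlinkSQLEnv dialect, sketched: an environment task holds statements that run before any FlinkSQL task that selects it, so catalogs and shared tables are defined once and reused. The catalog and table names below are hypothetical:

        -- Hypothetical FlinkSQLEnv task, selected by other FlinkSQL tasks as their execution environment.
        CREATE CATALOG shared_catalog WITH ('type' = 'generic_in_memory');
        USE CATALOG shared_catalog;

        -- A shared source table visible to every job that reuses this environment.
        CREATE TABLE datagen_source (
            id BIGINT,
            name STRING
        ) WITH (
            'connector' = 'datagen',
            'rows-per-second' = '5'
        );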
    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.5.0.tar.gz(65.60 MB)
  • 0.4.0(Dec 3, 2021)

    1. Support FlinkSQL execution and job operations in standalone, yarn-session, yarn-per-job and yarn-application modes;
    2. Support User Jar submission in yarn-application mode;
    3. Support SavePoint management and recovery;
    4. Support SELECT and SHOW statement preview;
    5. Support syntax enhancements such as SQL fragments, AGGTABLE and statement sets (see the sketch after this list);
    6. Support auto-completion based on documentation modules or SQL context;
    7. Support all sql-client syntax;
    8. Support FlinkSQL syntax and logic validation;
    9. Support table-level lineage analysis;
    10. Support StreamGraph plan preview;
    11. Support shared session mode and its Catalog management;
    12. Support data source management and metadata queries;
    13. Support cluster and configuration management;
    14. Support documentation management;
    15. Support Jar management;
    16. Support user management;
    17. Support system configuration.
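
    The syntax enhancements in item 5, sketched together. The source and sink tables and the TOP2 table-valued aggregate are hypothetical: a fragment is declared as key := value and expanded wherever ${key} appears; AGGTABLE applies a table-valued aggregate function (UDATF) through AGG BY; and consecutive INSERTs in one task are submitted as a single statement set:

        -- SQL fragment: expanded textually wherever ${top_cols} appears.
        top_cols := cls, score;

        -- AGGTABLE: TOP2 is a sample UDATF emitting (score, rank) rows per group.
        CREATE AGGTABLE agg_score AS
        SELECT ${top_cols}, rank
        FROM score_source
        GROUP BY cls
        AGG BY TOP2(score) AS (score, rank);

        -- Statement set: both INSERTs are submitted together as one job.
        INSERT INTO score_top2 SELECT cls, score, rank FROM agg_score;
        INSERT INTO score_backup SELECT cls, score FROM score_source;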
    Source code(tar.gz)
    Source code(zip)
    dlink-release-0.4.0.tar.gz(39.52 MB)