Overview

Apache Kylin

Apache Kylin is an open source distributed analytics engine that provides a SQL interface and multi-dimensional analysis (OLAP) on Hadoop, supporting extremely large datasets. Initially contributed by eBay Inc.

This code base is retained for historical interest only; please visit the Apache Incubator repository for the latest code: https://github.com/apache/incubator-kylin

This GitHub repository is no longer maintained; please visit kylin.apache.org instead. If you are seeking help, please use the Apache Kylin mailing list: http://kylin.apache.org/community/

Comments
  • Can't login the web server

    I installed Kylin on my Linux machine, but I get an error when logging in: "Unable to login, please check your username/password and make sure you have L2 access." I don't know the reason; please help.

    bug 
    opened by xingfengshen 23
  • hcatalog lib not found

    I installed Kylin 0.7.1 on Linux (CentOS 6) with Apache Hive 0.13, Apache HBase 0.98, and Apache Hadoop 2.4.0. When I run kylin.sh start, it fails with "hcatalog lib not found".

    I don't know the reason; please help.

    opened by zhouming2015 20
  • ResourceStore error

    When I run deploy.sh, there is an error. In the source code I found this snippet:

    static {
        knownImpl.add(HBaseResourceStore.class);
        knownImpl.add(FileResourceStore.class);
    }

    and

    for (Class<? extends ResourceStore> cls : knownImpl) {
        try {
            r = cls.getConstructor(KylinConfig.class).newInstance(kylinConfig);

    The error occurs at the getConstructor/newInstance call for FileResourceStore.


    java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at com.kylinolap.common.persistence.ResourceStore.getStore(ResourceStore.java:66)
        at com.kylinolap.cube.CubeManager.getStore(CubeManager.java:640)
        at com.kylinolap.cube.CubeManager.loadAllCubeInstance(CubeManager.java:590)
        at com.kylinolap.cube.CubeManager.<init>(CubeManager.java:118)
        at com.kylinolap.cube.CubeManager.getInstance(CubeManager.java:83)
        at com.kylinolap.job.SampleCubeSetupTest.after(SampleCubeSetupTest.java:63)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
    Caused by: java.lang.IllegalArgumentException: File not exist by 'kylin_metadata_qa@hbase:sjs_53_225:2181:/hbase': /home/imeda/zhangjp/Kylin/job/kylin_metadata_qa@hbase:sjs_53_225:2181:/hbase
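
    For readers unfamiliar with the snippet quoted in this report: the code tries each known ResourceStore implementation in turn and keeps the first whose constructor succeeds. Below is a minimal, self-contained sketch of that reflection-based fallback pattern; the class and method names are hypothetical stand-ins, not Kylin's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the ResourceStore fallback described above:
// try each known implementation's constructor and return the first that works.
public class StoreFactory {

    public interface Store {}

    public static class PrimaryStore implements Store {
        // Simulates HBaseResourceStore failing when its backend is unreachable.
        public PrimaryStore(String config) {
            throw new IllegalStateException("backend not reachable");
        }
    }

    public static class FallbackStore implements Store {
        public FallbackStore(String config) {}
    }

    private static final List<Class<? extends Store>> KNOWN_IMPL = new ArrayList<>();
    static {
        KNOWN_IMPL.add(PrimaryStore.class);
        KNOWN_IMPL.add(FallbackStore.class);
    }

    public static Store getStore(String config) {
        for (Class<? extends Store> cls : KNOWN_IMPL) {
            try {
                // A failing constructor surfaces as InvocationTargetException,
                // which is caught below so the next candidate is tried.
                return cls.getConstructor(String.class).newInstance(config);
            } catch (Exception e) {
                // fall through to the next implementation
            }
        }
        throw new IllegalArgumentException("no resource store accepts: " + config);
    }

    public static void main(String[] args) {
        // PrimaryStore throws, so the factory falls back to FallbackStore.
        System.out.println(getStore("file:///tmp/metadata").getClass().getSimpleName());
    }
}
```

    The stack trace above matches the failure point of this pattern: the InvocationTargetException wraps whatever the candidate constructor threw, so the root cause ("File not exist by ...") is in the Caused by line.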

    help wanted 
    opened by watertosea 17
  • Query planner somehow doesn't recognize table name in lower case

    I tried to run the SQL below; it failed with no detailed reason on the web UI. I found the reason in the Tomcat log:

    SQL: select lstg_format_name, week_beg_dt, meta_categ_name, sum(price) as price
    from test_kylin_fact as fact 
    inner join test_category_groupings as cat on cat.leaf_categ_id = fact.leaf_categ_id
    and cat.site_id = fact.lstg_site_id
    inner join test_cal_dt as cal on cal.cal_dt = fact.cal_dt
    where week_beg_dt >= date'2013-01-01' and week_beg_dt <= date'2013-05-01'
    group by week_beg_dt, cat.meta_categ_name, lstg_format_name
    order by week_beg_dt
    User: ADMIN
    Success: false
    Duration: 0.002
    Project: onlyinner
    Cube Names: []
    Cuboid Ids: []
    Total scan count: 0
    Result row count: 0
    Accept Partial: true
    Hit Cache: true
    Message: error while executing SQL "select lstg_format_name, week_beg_dt, meta_categ_name, sum(price) as price
    from test_kylin_fact as fact 
    inner join test_category_groupings as cat on cat.leaf_categ_id = fact.leaf_categ_id
    and cat.site_id = fact.lstg_site_id
    inner join test_cal_dt as cal on cal.cal_dt = fact.cal_dt
    where week_beg_dt >= date'2013-01-01' and week_beg_dt <= date'2013-05-01'
    group by week_beg_dt, cat.meta_categ_name, lstg_format_name
    order by week_beg_dt LIMIT 50000": From line 3, column 12 to line 3, column 34: Table 'TEST_CATEGORY_GROUPINGS' not found
    

    Then I reran the SQL using the exact table name (in upper case), and it succeeded.

    SQL: select lstg_format_name, week_beg_dt, meta_categ_name, sum(price) as price
    from test_kylin_fact as fact 
    inner join TEST_CATEGORY_GROUPINGS as cat on cat.leaf_categ_id = fact.leaf_categ_id
    and cat.site_id = fact.lstg_site_id
    inner join test_cal_dt as cal on cal.cal_dt = fact.cal_dt
    where week_beg_dt >= date'2013-01-01' and week_beg_dt <= date'2013-05-01'
    group by week_beg_dt, cat.meta_categ_name, lstg_format_name
    order by week_beg_dt
    User: ADMIN
    Success: true
    Duration: 0.997
    Project: onlyinner
    Cube Names: [test_kylin_cube_with_slr_empty]
    Cuboid Ids: [423]
    Total scan count: 0
    Result row count: 0
    Accept Partial: true
    Hit Cache: false
    Message: null
    

    I tried the simplest SQL to validate whether the lower-case table name is valid; it is.

    SQL: select * from test_category_groupings
    User: ADMIN
    Success: true
    Duration: 0.052
    Project: onlyinner
    Cube Names: [test_kylin_cube_with_slr_empty]
    Cuboid Ids: []
    Total scan count: 0
    Result row count: 144
    Accept Partial: true
    Hit Cache: false
    Message: null
    

    So why did the first SQL fail with "table not found"?
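
    A common source of this kind of mismatch is standard SQL identifier case folding, which Calcite (the query planner Kylin uses) follows by default: unquoted identifiers are folded to upper case before catalog lookup, while double-quoted identifiers keep their case. The sketch below illustrates only that folding rule; it is not Kylin's actual name-resolution code.

```java
// Illustration of standard SQL identifier case folding (not Kylin's code):
// unquoted identifiers fold to upper case, quoted ones keep their exact case.
public class IdentifierFolding {

    public static String resolve(String identifier) {
        // A double-quoted identifier is used verbatim, minus the quotes.
        if (identifier.length() >= 2
                && identifier.startsWith("\"") && identifier.endsWith("\"")) {
            return identifier.substring(1, identifier.length() - 1);
        }
        // An unquoted identifier is folded to upper case before lookup.
        return identifier.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(resolve("test_category_groupings"));     // TEST_CATEGORY_GROUPINGS
        System.out.println(resolve("\"test_category_groupings\"")); // test_category_groupings
    }
}
```

    Under this rule both spellings should resolve to the same upper-case catalog entry, which is why the behavior reported in this issue (lower case failing only in the join) looks like a planner bug rather than expected folding.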

    opened by branky 12
  • Urgent--Kylin:Job Failure--You do not own the lock: /kylin/job_engine/lock/Build_Test_Cube_Engine

    The new fact table has been written to $KYLIN_METADATA_URL/data/TEST_KYLIN_FACT.csv

    SCP file /tmp/TEST_CAL_DT.csv to /tmp/kylin
    L4J [2015-01-01 03:49:53,423][INFO][org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation] - Closing master protocol: MasterService
    L4J [2015-01-01 03:49:53,422][DEBUG][com.kylinolap.job.engine.JobEngine] - Closing HBASE connection
    Exception in thread "Thread-11" java.lang.RuntimeException: java.lang.IllegalMonitorStateException: You do not own the lock: /kylin/job_engine/lock/Build_Test_Cube_Engine
        at com.kylinolap.job.engine.JobEngine.releaseLock(JobEngine.java:107)
        at com.kylinolap.job.engine.JobEngine.access$100(JobEngine.java:49)
        at com.kylinolap.job.engine.JobEngine$1.run(JobEngine.java:90)
    Caused by: java.lang.IllegalMonitorStateException: You do not own the lock: /kylin/job_engine/lock/Build_Test_Cube_Engine
        at org.apache.curator.framework.recipes.locks.InterProcessMutex.release(InterProcessMutex.java:128)
        at com.kylinolap.job.engine.JobEngine.releaseLock(JobEngine.java:98)
        ... 2 more
    L4J [2015-01-01 03:49:53,456][INFO][org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation] - Closing zookeeper sessionid=0x14aa44c0b92002e
    L4J [2015-01-01 03:49:53,460][INFO][org.apache.zookeeper.ClientCnxn] - EventThread shut down
    L4J [2015-01-01 03:49:53,460][INFO][org.apache.zookeeper.ZooKeeper] - Session: 0x14aa44c0b92002e closed

    Results :

    Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary:
    [INFO]
    [INFO] Kylin:HadoopOLAPEngine ............................. SUCCESS [  0.004 s]
    [INFO] Kylin:AtopCalcite .................................. SUCCESS [  2.521 s]
    [INFO] Kylin:Common ....................................... SUCCESS [  1.436 s]
    [INFO] Kylin:Metadata ..................................... SUCCESS [  0.224 s]
    [INFO] Kylin:Dictionary ................................... SUCCESS [  0.238 s]
    [INFO] Kylin:Cube ......................................... SUCCESS [  1.019 s]
    [INFO] Kylin:Job .......................................... FAILURE [ 49.139 s]
    [INFO] Kylin:Storage ...................................... SKIPPED
    [INFO] Kylin:Query ........................................ SKIPPED
    [INFO] Kylin:RESTServer ................................... SKIPPED
    [INFO] Kylin:Jdbc ......................................... SKIPPED
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
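
    The IllegalMonitorStateException in this report is thrown when a thread releases a lock it never acquired; Curator's InterProcessMutex behaves like a JDK lock in this respect. A generic sketch of the guard that avoids it, shown here with java.util.concurrent's ReentrantLock for illustration (this is not Kylin's actual JobEngine code):

```java
import java.util.concurrent.locks.ReentrantLock;

// Generic acquire/release discipline: release only what was actually acquired.
// Releasing an un-held lock is what raises IllegalMonitorStateException.
public class GuardedRelease {

    public static void withLock(ReentrantLock lock, Runnable task) {
        boolean acquired = lock.tryLock();
        try {
            if (acquired) {
                task.run();
            }
        } finally {
            if (acquired) {
                // Guard: never unlock a lock this thread does not hold.
                lock.unlock();
            }
        }
    }

    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        withLock(lock, () -> System.out.println("job ran"));
        System.out.println("held after: " + lock.isHeldByCurrentThread()); // held after: false
    }
}
```

    In the reported shutdown hook, the same discipline would mean tracking whether the ZooKeeper lock acquisition succeeded before calling release on it.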

    help wanted 
    opened by ghost 11
  • Failed to execute goal on project kylin-query

    I built my own Hadoop, HBase, and Hive environments, and all work well. I downloaded Kylin, but when I execute "mvn clean install -DskipTests", there is a problem as follows.

    [/home/yc/Kylin]# mvn clean install -DskipTests

    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary:
    [INFO] 
    [INFO] Kylin:HadoopOLAPEngine ............................. SUCCESS [  0.274 s]
    [INFO] Kylin:AtopCalcite .................................. SUCCESS [  2.802 s]
    [INFO] Kylin:Common ....................................... SUCCESS [  1.974 s]
    [INFO] Kylin:Metadata ..................................... SUCCESS [  1.012 s]
    [INFO] Kylin:Dictionary ................................... SUCCESS [  1.249 s]
    [INFO] Kylin:Cube ......................................... SUCCESS [  1.321 s]
    [INFO] Kylin:Job .......................................... SUCCESS [ 13.970 s]
    [INFO] Kylin:Storage ...................................... SUCCESS [  1.623 s]
    [INFO] Kylin:Query ........................................ FAILURE [25:09 min]
    [INFO] Kylin:RESTServer ................................... SKIPPED
    [INFO] Kylin:Jdbc ......................................... SKIPPED
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 25:34 min
    [INFO] Finished at: 2014-12-02T04:48:43-05:00
    [INFO] Final Memory: 100M/1429M
    [INFO] ------------------------------------------------------------------------
    [ERROR] Failed to execute goal on project kylin-query: Could not resolve dependencies for project
    com.kylinolap:kylin-query:jar:0.6.3-SNAPSHOT: The following artifacts could not be resolved: 
    xalan:xalan:jar:2.7.1, com.h2database:h2:jar:1.3.174, org.apache.hive:hive-metastore:jar:0.13.0, 
    org.apache.derby:derby:jar:10.10.1.1, org.datanucleus:datanucleus-api-jdo:jar:3.2.6, 
    org.datanucleus:datanucleus-core:jar:3.2.10, org.apache.hive:hive-serde:jar:0.13.0, 
    org.apache.hive:hive-service:jar:0.13.0, org.eclipse.jetty.aggregate:jetty-all:jar:7.6.0.v20120127, 
    javax.mail:mail:jar:1.4.1: Could not transfer artifact xalan:xalan:jar:2.7.1 from/to central 
    (http://repo.maven.apache.org/maven2): GET request of: xalan/xalan/2.7.1/xalan-2.7.1.jar from central 
    failed: Premature end of Content-Length delimited message body (expected: 3176148; received: 
    1762929 -> [Help 1]
    [ERROR] 
    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR] 
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
    [ERROR] 
    [ERROR] After correcting the problems, you can resume the build with the command
    [ERROR]   mvn <goals> -rf :kylin-query
    

    Has anyone else met this problem, and how can it be fixed?

    help wanted 
    opened by ChengHuaUESTC 10
  • Can't get cube source record size.

    Dear all! I deployed Kylin on Apache Hadoop. When I build the job, it fails on the last step. Here is some of the log:

    [QuartzScheduler_Worker-4]:[2014-11-24 19:24:47,280][INFO][com.kylinolap.job.flow.JobFlowListener.jobWasExecuted(JobFlowListener.java:93)] - cube_job_group.test_kylin_cube_without_slr_empty.30bbd69c-33ae-4983-aaf7-f46ab42b88dd.14 status: FINISHED
    [QuartzScheduler_Worker-4]:[2014-11-24 19:24:47,280][DEBUG][com.kylinolap.cube.CubeManager.loadCubeInstance(CubeManager.java:604)] - Loading CubeInstance kylin_metadata_qa(key='/cube/test_kylin_cube_without_slr_empty.json')@kylin_metadata_qa@hbase:192.168.44.21:2181:/hbase9827
    [QuartzScheduler_Worker-4]:[2014-11-24 19:24:47,295][DEBUG][com.kylinolap.common.persistence.ResourceStore.putResource(ResourceStore.java:166)] - Saving resource /job/30bbd69c-33ae-4983-aaf7-f46ab42b88dd (Store kylin_metadata_qa@hbase:192.168.44.21:2181:/hbase9827)
    [QuartzScheduler_Worker-4]:[2014-11-24 19:24:47,300][INFO][com.kylinolap.job.flow.JobFlowListener.updateCubeSegmentInfoOnSucceed(JobFlowListener.java:238)] - Updating cube segment FULL_BUILD for cube test_kylin_cube_without_slr_empty
    [QuartzScheduler_Worker-4]:[2014-11-24 19:24:47,301][ERROR][com.kylinolap.job.flow.JobFlowListener.jobWasExecuted(JobFlowListener.java:117)] - Can't get cube source record size.
    java.lang.RuntimeException: Can't get cube source record size.
            at com.kylinolap.job.flow.JobFlowListener.updateCubeSegmentInfoOnSucceed(JobFlowListener.java:274)
            at com.kylinolap.job.flow.JobFlowListener.jobWasExecuted(JobFlowListener.java:99)
            at org.quartz.core.QuartzScheduler.notifyJobListenersWasExecuted(QuartzScheduler.java:1985)
            at org.quartz.core.JobRunShell.notifyJobListenersComplete(JobRunShell.java:340)
            at org.quartz.core.JobRunShell.run(JobRunShell.java:224)
            at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
    

    Here is some version info: Kylin 0.6.2, Hadoop 2.4.1, HBase 0.98.4-hadoop2, ZooKeeper 3.4.5, Hive 0.13.0.

    I found some of the job's info in HBase:

    get 'kylin_metadata_qa_job', '/job/1eaf05f8-e9f5-4dd7-9748-ab9b3098a512'
    {
        "uuid": "1eaf05f8-e9f5-4dd7-9748-ab9b3098a512",
        "name": "test_kylin_cube_without_slr_empty - FULL_BUILD - BUILD - PST 2014-11-24 23:00:15",
        "type": "BUILD",
        "duration": 461,
        "steps": [
            {
                "name": "Create Intermediate Flat Hive Table",
                "info": null,
                "interruptCmd": null,
                "sequence_id": 0,
                "exec_cmd": "hive -e \"DROP TABLE IF EXISTS kylin_intermediate_test_kylin_cube_without_slr_desc_FULL_BUILD_1eaf05f8_e9f5_4dd7_9748_ab9b3098a512;
                CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_test_kylin_cube_without_slr_desc_FULL_BUILD_1eaf05f8_e9f5_4dd7_9748_ab9b3098a512 (
                CAL_DT date,
                LEAF_CATEG_ID int,
                LSTG_SITE_ID int,
                META_CATEG_NAME string,
                CATEG_LVL2_NAME string,
                CATEG_LVL3_NAME string,
                LSTG_FORMAT_NAME string,
                SLR_SEGMENT_CD smallint,
                PRICE decimal(38,16),
                SELLER_ID bigint)
    ...}
    ]
    

    In the source code, at com.kylinolap.job.flow.JobFlowListener.updateCubeSegmentInfoOnSucceed (JobFlowListener.java:272):

                    JobStep createFlatTableStep = jobInstance.findStep(JobConstants.STEP_NAME_CREATE_FLAT_HIVE_TABLE);
                    if (null != createFlatTableStep) {
                        String sourceRecordsSize = createFlatTableStep.getInfo(JobInstance.SOURCE_RECORDS_SIZE);
                        if (sourceRecordsSize == null || sourceRecordsSize.equals("")) {
                            throw new RuntimeException("Can't get cube source record size.");
                        }
                        sourceSize = Long.parseLong(sourceRecordsSize);
                    } else {
                        log.info("No step with name '" + JobConstants.STEP_NAME_CREATE_FLAT_HIVE_TABLE + "' is found");
                    }
    

    The "info" field is null, which causes the RuntimeException to be thrown. Does anyone have an idea about it?

    help wanted 
    opened by Yancey1989 10
  • Cube TEST_SERVER dosen't contain any READY segment

    I read the source code and didn't find any method to add a segment.

    error:

    Caused by: com.kylinolap.rest.exception.InternalErrorException: Cube TEST_SERVER dosen't contain any READY segment
        at com.kylinolap.rest.service.CubeService.enableCube(CubeService.java:347)
        at com.kylinolap.rest.service.CubeService$$FastClassByCGLIB$$402bf2b9.invoke()
        at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
        at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(Cglib2AopProxy.java:689)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
        at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:64)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
        at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:622)
        at com.kylinolap.rest.service.CubeService$$EnhancerByCGLIB$$5e6d8dba.enableCube()
        at com.kylinolap.rest.controller.CubeController.enableCube(CubeController.java:267)
        ... 82 more

    help wanted 
    opened by watertosea 9
  • Installation fails if multiple ZK

    I came across this minor issue during installation: if you have multiple ZooKeeper nodes in your cluster, installation fails with the error

    java.lang.IllegalArgumentException: Zookeeper connection string is null or empty

    This is because the deploy.sh script gets the ZK quorum from HbaseConfigPrinter (/job/src/main/java/com/kylinolap/job/deployment/HbaseConfigPrinter.java#L46), which returns all the configured ZK nodes, but deploy.sh expects only one.

    deploy.sh - https://github.com/KylinOLAP/Kylin/blob/master/deploy.sh#L126

    As a workaround you can set KYLIN_ZOOKEEPER_URL="ZK1:2181:/hbase-unsecure"

    A better solution is to change /job/src/main/java/com/kylinolap/job/deployment/HbaseConfigPrinter.java#L71 to tokenize the string on ":", pick the first ZK from it, and return only that single value. Let me know if I should create a patch.
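
    A sketch of the suggested fix, assuming the printer emits a comma-separated host list in the style of HBase's hbase.zookeeper.quorum, e.g. "zk1,zk2,zk3:2181:/hbase-unsecure" (the exact format and method name here are assumptions, not HbaseConfigPrinter's actual output):

```java
// Hypothetical helper: reduce a multi-host ZooKeeper connection string to a
// single-host one, since deploy.sh expects exactly one ZK entry.
public class ZkQuorumFix {

    public static String firstZk(String quorum) {
        int comma = quorum.indexOf(',');
        if (comma < 0) {
            return quorum;                 // already a single host
        }
        String firstHost = quorum.substring(0, comma);
        // The port/path suffix trails the last host, e.g. "zk3:2181:/hbase-unsecure".
        String suffix = quorum.substring(quorum.lastIndexOf(',') + 1);
        // Strip that last host's name and reattach ":port:/path" to the first host.
        return firstHost + suffix.substring(suffix.indexOf(':'));
    }

    public static void main(String[] args) {
        System.out.println(firstZk("zk1,zk2,zk3:2181:/hbase-unsecure")); // zk1:2181:/hbase-unsecure
    }
}
```

    This matches the KYLIN_ZOOKEEPER_URL workaround above: the script only needs one reachable ensemble member to bootstrap.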

    enhancement 
    opened by kamaldeep-ebay 7
  • java.lang.ClassNotFoundException: org.apache.hadoop.hbase.util.CompressionTest

    When I run ./deploy.sh, there is a problem:

    Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/CompressionTest
            at com.kylinolap.job.tools.LZOSupportnessChecker.getSupportness(LZOSupportnessChecker.java:15)
            at com.kylinolap.job.deployment.HbaseConfigPrinter$ConfigLoader$1.loadValue(HbaseConfigPrinter.java:67)
            at com.kylinolap.job.deployment.HbaseConfigPrinter.printConfigs(HbaseConfigPrinter.java:42)
            at com.kylinolap.job.deployment.HbaseConfigPrinter.main(HbaseConfigPrinter.java:30)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
    Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.util.CompressionTest
            at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
            ... 9 more
    ERROR exit from ./deploy.sh : line 105 with exit code 1
    

    Obviously, the jar containing org.apache.hadoop.hbase.util.CompressionTest is missing from the classpath. What can I do to resolve it?

    opened by ChengHuaUESTC 7
  • Kylin Cuboid Whitelist

    Proposal from Hongbin about the cuboid whitelist:

    Logically, a cube contains cuboids representing all combinations of dimensions. A naive cube-building strategy that materializes every cuboid therefore quickly runs into the curse of dimensionality. Currently Kylin uses a strategy called "aggregation groups" to reduce the number of cuboids that need to be materialized.

    However, if the query pattern is simple and fixed, the "aggregation group" strategy is still not efficient enough. For example, suppose there are five dimensions, namely A, B, C, D and E. The data modeler is sure that only the combinations (A,B,C), (D,E) and (A,E) will be queried, so he will use the aggregation group tool to optimize his cube definition. Yet whichever aggregation group he chooses, many useless combinations will be materialized.

    With a new strategy called "cuboid whitelist", data modelers can guide Kylin to materialize only the cuboids they are interested in. Based on the whitelist, Kylin will materialize the minimal set of cuboids needed to cover every cuboid in the whitelist. To support this, the following functionality should be added:

    1. Front-end/UI for specifying whitelist members, and persisting them to the cube description.
    2. Enhanced job engine scheduler that will calculate a minimal spanning build tree based on the whitelist.
    3. (OPTIONAL) Enhanced job engine to support a dynamic whitelist, triggering new builds for newly added whitelist members.

    Hongbin Ma
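
    The minimal build tree in item 2 can be sketched by treating cuboids as bit masks over dimensions. The sketch below makes simplifying assumptions (the base cuboid with all dimensions is always materialized, and each whitelisted cuboid is aggregated from its smallest materialized superset); it is an illustration of the idea, not Kylin's actual scheduler.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a whitelist-driven build plan: each cuboid is an int bit mask over
// dimensions, and a cuboid can be aggregated from any strict superset of it.
public class CuboidWhitelist {

    // Maps each whitelisted cuboid to the smallest materialized superset
    // it can be computed from (falling back to the base cuboid).
    public static Map<Integer, Integer> buildTree(Set<Integer> whitelist, int numDims) {
        int base = (1 << numDims) - 1;          // cuboid with every dimension
        Set<Integer> materialized = new HashSet<>(whitelist);
        materialized.add(base);

        Map<Integer, Integer> parent = new HashMap<>();
        for (int cuboid : whitelist) {
            int best = base;
            for (int candidate : materialized) {
                boolean strictSuperset = candidate != cuboid
                        && (candidate & cuboid) == cuboid;
                if (strictSuperset
                        && Integer.bitCount(candidate) < Integer.bitCount(best)) {
                    best = candidate;           // smaller superset = cheaper aggregation
                }
            }
            parent.put(cuboid, best);
        }
        return parent;
    }

    public static void main(String[] args) {
        // Dimensions A..E as bits 0..4; whitelist {A,B,C}, {D,E}, {A,E} from the text.
        int abc = 0b00111, de = 0b11000, ae = 0b10001;
        Map<Integer, Integer> tree =
                buildTree(new HashSet<>(Arrays.asList(abc, de, ae)), 5);
        // No whitelist member is a superset of another, so all build from the base.
        System.out.println(tree.get(abc) == 0b11111); // true
    }
}
```

    With this plan only the three whitelisted cuboids plus the base cuboid are materialized, instead of every combination an aggregation group would imply.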

    enhancement 
    opened by lukehan 6
  • Bump jackson-databind from 2.2.3 to 2.12.6.1

    Bumps jackson-databind from 2.2.3 to 2.12.6.1.

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump calcite-core from 0.9.1-incubating to 1.32.0

    Bumps calcite-core from 0.9.1-incubating to 1.32.0.

    Commits
    • 413eded [CALCITE-5275] Release Calcite 1.32.0
    • 57aafa3 Cosmetic changes to release notes
    • 2624925 [CALCITE-5262] Add many spatial functions, including support for WKB (well-kn...
    • 479afa6 [CALCITE-5278] Upgrade Janino from 3.1.6 to 3.1.8
    • 1167b12 [CALCITE-5270] JDBC adapter should not generate 'FILTER (WHERE)' in Firebolt ...
    • 89c940c [CALCITE-5241] Implement CHAR function for MySQL and Spark, also JDBC '{fn CH...
    • d20fd09 [CALCITE-5274] Improve DocumentBuilderFactory in DiffRepository test class by...
    • 6302e6f [CALCITE-5277] Make EnumerableRelImplementor stashedParameters order determin...
    • baeecc8 [CALCITE-5251] Support SQL hint for Snapshot
    • ba80b91 [CALCITE-5263] Improve XmlFunctions by using an XML DocumentBuilder
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Bump xalan from 2.7.1 to 2.7.2

    Bumps xalan from 2.7.1 to 2.7.2.


    dependencies 
    opened by dependabot[bot] 0
  • Bump jsch from 0.1.51 to 0.1.54

    Bumps jsch from 0.1.51 to 0.1.54.


    dependencies 
    opened by dependabot[bot] 0
  • Bump tomcat-catalina from 7.0.52 to 7.0.81 in /server


    Bumps tomcat-catalina from 7.0.52 to 7.0.81.

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. The standard Dependabot commands and options apply (see above).

    dependencies 
    opened by dependabot[bot] 0
  • Bump commons-email from 1.1 to 1.5 in /job


    Bumps commons-email from 1.1 to 1.5.

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. The standard Dependabot commands and options apply (see above).

    dependencies 
    opened by dependabot[bot] 0
Releases(v0.6.5)
  • v0.6.5(Feb 6, 2015)

  • v0.6.4(Jan 14, 2015)

    This is the last stable release on github.com. All code will migrate to the Apache Git repo.

    Future releases will come from the Apache Git repo and follow the Apache Release Process.

    This release contains many bug fixes, but no major features are introduced. One notable enhancement is a redesigned Cube Designer for a better user experience.

    Source code(tar.gz)
    Source code(zip)
  • v0.6.2(Nov 20, 2014)

    Change highlights:

    • 7070 is now the default Tomcat port
    • Default account and password changed to ADMIN/KYLIN
    • Login with project: users must select a project they have permission to access when logging in through the Kylin web UI
    • Isolated Hive tables by project: Hive tables are now isolated by project, so users can create different projects for different data models; Hive table metadata also syncs up to Kylin per project
    • Job List page: jobs are now ordered by last modified time
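    As an illustrative sketch (not part of the release notes): the new default ADMIN/KYLIN credentials would typically be supplied to Kylin's REST API as an HTTP Basic `Authorization` header, which is just the base64-encoded `user:password` pair. The helper name below is hypothetical.

    ```python
    import base64

    def basic_auth_header(user: str, password: str) -> str:
        """Build the HTTP Basic Authorization header value for user:password."""
        token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
        return f"Basic {token}"

    # Default credentials after this release (changed to ADMIN/KYLIN).
    print(basic_auth_header("ADMIN", "KYLIN"))  # Basic QURNSU46S1lMSU4=
    ```

    The same header would be sent with any authenticated request; only the credentials changed in this release, not the authentication mechanism.
    
    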

    Fixed Bugs:

    • Can't sync up Hive tables from other database rather than 'default' : #69
    • BarChart and PieChart not work correctly on Kylin web: #42
    • Cube's last modified time will set to wrong value: #58

    Please refer to Kylin's GitHub Issues for more details about fixed bugs.

    There are no changes to the ODBC Driver; your existing dashboards will not be impacted.

    Source code(tar.gz)
    Source code(zip)
Owner
Kylin OLAP Engine
Extreme OLAP Engine for Big Data