Apache Cayenne

Apache Cayenne Logo

Apache Cayenne is an open source persistence framework licensed under the Apache License, providing object-relational mapping (ORM) and remoting services.

Quick Start

Create XML mapping

Modeler GUI application

You can use Cayenne Modeler to manually create a Cayenne project without a DB. Binary distributions can be downloaded from https://cayenne.apache.org/download/

Modeler

See the tutorial: https://cayenne.apache.org/docs/4.1/getting-started-guide/

Maven plugin

Additionally, you can use the Cayenne Maven (or Gradle) plugin to create a model based on an existing DB structure. Here is an example of a Cayenne Maven plugin setup that will do it:

<plugin>
    <groupId>org.apache.cayenne.plugins</groupId>
    <artifactId>cayenne-maven-plugin</artifactId>
    <version>4.1</version>

    <dependencies>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.13</version>
        </dependency>
    </dependencies>

    <configuration>
        <map>${project.basedir}/src/main/resources/demo.map.xml</map>
        <cayenneProject>${project.basedir}/src/main/resources/cayenne-demo.xml</cayenneProject>
        <dataSource>
            <url>jdbc:mysql://localhost:3306/cayenne_demo</url>
            <driver>com.mysql.cj.jdbc.Driver</driver>
            <username>user</username>
            <password>password</password>
        </dataSource>
        <dbImport>
            <defaultPackage>org.apache.cayenne.demo.model</defaultPackage>
        </dbImport>
    </configuration>
</plugin>

Run it:

mvn cayenne:cdbimport
mvn cayenne:cgen

See the tutorial: https://cayenne.apache.org/docs/4.1/getting-started-db-first/

Gradle plugin

And here is an example of the Cayenne Gradle plugin setup:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath group: 'org.apache.cayenne.plugins', name: 'cayenne-gradle-plugin', version: '4.1'
        classpath 'mysql:mysql-connector-java:8.0.13'
    }
}

apply plugin: 'org.apache.cayenne'
cayenne.defaultDataMap 'demo.map.xml'

cdbimport {   
    cayenneProject 'cayenne-demo.xml'

    dataSource {
        driver 'com.mysql.cj.jdbc.Driver'
        url 'jdbc:mysql://127.0.0.1:3306/cayenne_demo'
        username 'user'
        password 'password'
    }

    dbImport {
        defaultPackage = 'org.apache.cayenne.demo.model'
    }
}

cgen.dependsOn cdbimport
compileJava.dependsOn cgen

Run it:

gradlew build

Include Cayenne into your project

Maven
<dependencies>
    <dependency>
        <groupId>org.apache.cayenne</groupId>
        <artifactId>cayenne-server</artifactId>
        <version>4.1</version>
    </dependency>
</dependencies>
Gradle
compile group: 'org.apache.cayenne', name: 'cayenne-server', version: '4.1'
 
// or, if Gradle plugin is used
compile cayenne.dependency('server')

Create Cayenne Runtime

ServerRuntime cayenneRuntime = ServerRuntime.builder()
    .addConfig("cayenne-demo.xml")
    .dataSource(DataSourceBuilder
             .url("jdbc:mysql://localhost:3306/cayenne_demo")
             .driver("com.mysql.cj.jdbc.Driver")
             .userName("username")
             .password("password")
             .build())
    .build();

Create New Objects

ObjectContext context = cayenneRuntime.newContext();

Artist picasso = context.newObject(Artist.class);
picasso.setName("Pablo Picasso");
picasso.setDateOfBirth(LocalDate.of(1881, 10, 25));

Gallery metropolitan = context.newObject(Gallery.class);
metropolitan.setName("Metropolitan Museum of Art");

Painting girl = context.newObject(Painting.class);
girl.setName("Girl Reading at a Table");

Painting stein = context.newObject(Painting.class);
stein.setName("Gertrude Stein");

picasso.addToPaintings(girl);
picasso.addToPaintings(stein);

girl.setGallery(metropolitan);
stein.setGallery(metropolitan);

context.commitChanges();

Queries

Select Objects
List<Painting> paintings = ObjectSelect.query(Painting.class)
        .where(Painting.ARTIST.dot(Artist.DATE_OF_BIRTH).lt(LocalDate.of(1900, 1, 1)))
        .prefetch(Painting.ARTIST.joint())
        .select(context);
Aggregate functions
// this is an artificial property signaling that we want to get the full object
Property<Artist> artistProperty = Property.createSelf(Artist.class);

List<Object[]> artistAndPaintingCount = ObjectSelect.columnQuery(Artist.class, artistProperty, Artist.PAINTING_ARRAY.count())
    .where(Artist.ARTIST_NAME.like("a%"))
    .having(Artist.PAINTING_ARRAY.count().lt(5L))
    .orderBy(Artist.PAINTING_ARRAY.count().desc(), Artist.ARTIST_NAME.asc())
    .select(context);

for(Object[] next : artistAndPaintingCount) {
    Artist artist = (Artist)next[0];
    long paintingsCount = (Long)next[1];
    System.out.println(artist.getArtistName() + " has " + paintingsCount + " painting(s)");
}
Raw SQL queries
// Selecting objects
List<Painting> paintings = SQLSelect
    .query(Painting.class, "SELECT * FROM PAINTING WHERE PAINTING_TITLE LIKE #bind($title)")
    .params("title", "painting%")
    .upperColumnNames()
    .localCache()
    .limit(100)
    .select(context);

// Selecting scalar values
List<String> paintingNames = SQLSelect
    .scalarQuery(String.class, "SELECT PAINTING_TITLE FROM PAINTING WHERE ESTIMATED_PRICE > #bind($price)")
    .params("price", 100000)
    .select(context);

// Insert values
int inserted = SQLExec
    .query("INSERT INTO ARTIST (ARTIST_ID, ARTIST_NAME) VALUES (#bind($id), #bind($name))")
    .paramsArray(55, "Picasso")
    .update(context);

Documentation

Getting Started

https://cayenne.apache.org/docs/4.1/getting-started-guide/

Getting Started Db-First

https://cayenne.apache.org/docs/4.1/getting-started-db-first/

Full documentation

https://cayenne.apache.org/docs/4.1/cayenne-guide/

JavaDoc

https://cayenne.apache.org/docs/4.1/api/

About

With a wealth of unique and powerful features, Cayenne can address a wide range of persistence needs. Cayenne seamlessly binds one or more database schemas directly to Java objects, managing atomic commit and rollbacks, SQL generation, joins, sequences, and more. With Cayenne's Remote Object Persistence, those Java objects can even be persisted out to clients via Web Services.

Cayenne is designed to be easy to use, without sacrificing flexibility or design. To that end, Cayenne supports database reverse engineering and generation, as well as a Velocity-based class generation engine. All of these functions can be controlled directly through the CayenneModeler, a fully functional GUI tool. No cryptic XML or annotation based configuration is required! An entire database schema can be mapped directly to Java objects within minutes, all from the comfort of the GUI-based CayenneModeler.

Cayenne supports numerous other features, including caching, a complete object query syntax, relationship pre-fetching, on-demand object and relationship faulting, object inheritance, database auto-detection, and generic persisted objects. Most importantly, Cayenne can scale up or down to virtually any project size. With a mature, 100% open source framework, an energetic user community, and a track record of solid performance in high-volume environments, Cayenne is an exceptional choice for persistence services.

Collaboration

License

Cayenne is available as free and open source under the Apache License, Version 2.0.

Comments
  • QueryCache improvements for local caching

    Added getQueryCache to the ObjectContext interface since this is already implemented by BaseContext anyway and makes accessing the cache much easier.

    Revised the signature of QueryCache.remove(String) to remove(QueryMetadata) to make it clearer; it was never obvious how to use this method before.

    Added QueryCache.clearLocalCache method. This is now called automatically at the end of the request-response loop in StatelessContextRequestHandler and by BaseContext.finalize. This will prevent memory leaking from locally cached data in cases where the cache is not configured to expire entries based on time.

    Added QueryCache.debugListCacheKeys method to list all keys (prefixed by cache group) in the cache for debugging purposes.
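
    A rough sketch of what the additions described above could look like (return types and exact signatures are assumptions, not the committed API):

    import java.util.List;

    import org.apache.cayenne.query.QueryMetadata;

    // Sketch only; shown as a standalone interface purely for illustration.
    public interface QueryCacheAdditions {

        // replaces remove(String), taking query metadata instead of a raw key
        void remove(QueryMetadata metadata);

        // drops locally cached entries; invoked at the end of the request-response loop
        void clearLocalCache();

        // lists all cache keys, prefixed by cache group, for debugging
        List<String> debugListCacheKeys();
    }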

    opened by johnthuss 6
  • Add new Maven profile postgres-docker

    • Useful to run integration tests without installed Postgres DB
    • Activation with -DcayenneTestConnection=postgres-docker
    • Starts a Postgres Docker container before the integration tests and stops it afterwards
    • Sets JDBC connection properties based on dynamic Docker container port allocation
    opened by seelmann 6
  • Ordering chaining

    Hey all, I've got a commit (that I plan on using) that I think is useful. I've added an Orderings class, which is a subclass of ArrayList, and added methods that allow chaining multiple Ordering instances created from Property objects.

    So for example:

    Person.COMPANY_NAME.asc().then(Person.FIRST_NAME.desc())

    Rather than typing:

    Arrays.asList(Person.COMPANY_NAME.asc(), Person.FIRST_NAME.desc())

    You can chain together Ordering (into an Orderings) or Orderings into other Orderings.

    Person.COMPANY_NAME.asc().then(Person.LAST_NAME.asc()).then(Person.FIRST_NAME.asc())

    I find it to be convenient and cleaner, but I'm used to this kind of thing in WOnder. :)

    Thoughts?

    opened by lonvarscsak 5
  • draft for a toManyTarget setter

    Sadly I didn't get any feedback this time on the developer mailing list, but this shouldn't contain any bugs anyway.

    This pull request provides a setter method for to-many targets, located in CayenneDataObject (thanks to David's advice). Others and I would like such functionality in the out-of-the-box class generation; see the user mailing list.

    Unlike addToManyTarget and removeFromManyTarget, this method takes an input Collection<? extends DataObject> and an optional boolean delete parameter (default is false) that controls deletion of the DataObjects whose relationships were removed.

    The method documentation should make clear what the method does and does not do.

    The superclass.vm template generates two setter methods per to-many relationship, with the relationship name as part of the method name. The variant without the delete parameter sets it to false and calls the other method.

    I don't know whether a deletion parameter is the best way to prevent orphaned DataObjects; maybe there is a better solution. I haven't implemented any test classes yet; that still has to be done.
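
    A minimal sketch of what such a generated setter could look like, reusing the Painting entity from the Quick Start above; the names and bodies are illustrative, not this pull request's actual template output:

    // Illustrative only: the delete-less variant defaults the parameter to false.
    public void setPaintings(Collection<? extends Painting> paintings) {
        setPaintings(paintings, false);
    }

    public void setPaintings(Collection<? extends Painting> paintings, boolean delete) {
        // drop relationships that are no longer present, optionally deleting the objects
        for (Painting existing : new ArrayList<>(getPaintings())) {
            if (!paintings.contains(existing)) {
                removeFromPaintings(existing);
                if (delete) {
                    getObjectContext().deleteObjects(existing);
                }
            }
        }
        // add the relationships that are new
        for (Painting painting : paintings) {
            if (!getPaintings().contains(painting)) {
                addToPaintings(painting);
            }
        }
    }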

    I would be happy if you would include my code in your project! Thanks, Johannes

    opened by ghost 5
  • Experimental graph-based db operations sorter

    This is an experimental implementation of a DB operations sorter that builds a directed graph and uses a topological sort to get the final order of operations. This sorter can correctly order meaningful PK intersections (i.e. delete/insert rows with the same primary keys). Moreover, it opens up the possibility of automatically breaking cycles in DB operations.
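
    Not the actual sorter from this pull request; just a compact illustration of the underlying technique (build a dependency graph between operations, then emit them in topological order):

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Kahn's algorithm over a map "operation -> operations that must run after it".
    final class OperationSorter {

        static List<String> sort(Map<String, List<String>> mustRunAfter) {
            Map<String, Integer> inDegree = new HashMap<>();
            mustRunAfter.forEach((op, successors) -> {
                inDegree.putIfAbsent(op, 0);
                for (String s : successors) {
                    inDegree.merge(s, 1, Integer::sum);
                }
            });

            Deque<String> ready = new ArrayDeque<>();
            inDegree.forEach((op, degree) -> { if (degree == 0) ready.add(op); });

            List<String> ordered = new ArrayList<>();
            while (!ready.isEmpty()) {
                String op = ready.poll();
                ordered.add(op);
                for (String s : mustRunAfter.getOrDefault(op, Collections.emptyList())) {
                    if (inDegree.merge(s, -1, Integer::sum) == 0) {
                        ready.add(s);
                    }
                }
            }
            // if ordered.size() < inDegree.size(), the graph contains a cycle that must be broken
            return ordered;
        }
    }

    For example, with edges "insert ARTIST" -> "insert PAINTING" and "delete PAINTING" -> "insert PAINTING" (same PK reused), the sorter emits the delete and the parent insert before the dependent insert.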

    opened by stariy95 4
  • Cgen - Avoid class loader issues causing missing classes

    Cgen - Avoid class loader issues causing missing classes; don't swallow missing class errors completely

    It took me almost a full day to find where an exception was being swallowed which would have indicated the problem here. Whatever class loader is being used by Cgen (AdhocObjectFactory) is not picking up classes that are available in the default class loader - namely all the Joda classes, like org.joda.time.LocalDateTime.

    This change makes it fall back to the default class loader if a class is not found, and logs a single-line warning if the class isn't found by either class loader.
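
    The gist of the change, as a rough sketch (not the actual AdhocObjectFactory code; "logger" stands for whatever logging facility is in scope):

    // Sketch only: try the preferred class loader first, then fall back to the default
    // one, and log a single warning if neither can find the class.
    private Class<?> loadClass(ClassLoader preferred, String className) {
        try {
            return Class.forName(className, true, preferred);
        } catch (ClassNotFoundException e) {
            try {
                return Class.forName(className, true, getClass().getClassLoader());
            } catch (ClassNotFoundException fallbackFailure) {
                logger.warn("Class not found by either class loader: " + className);
                return null;
            }
        }
    }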

    opened by johnthuss 4
  • Cayenne 4.2.M1 testing

    I'm trying out the 4.2.M1 snapshot in order to provide testing feedback.

    These changes aren't really meant to be applied directly, but to provide feedback on the issues I'm having.

    1. Joda time support is lacking in the class generation templates. I understand this is deprecated now, but since it hasn't been removed yet it should still work. Thinking ahead, there will need to be a way for third parties to integrate Joda support into Cayenne without having to create a hard fork. This doesn't currently appear to be possible; at least I don't know how to provide an additional Module to cgen via Ant that could supply PropertyDescriptorCreator changes.

    2. I need to have a method like PropertyUtils.propertyTypeDefinition exposed so I can utilize it in my templates. The refactoring to move logic into PropertyUtils makes the templates clean and simple, but users need access to smaller pieces as well.

    3. Using "between" expressions on Joda time DateProperty instances fails. This is a bug that still needs to be fixed. I don't know where to start on this one.

    4. Ant's cgen task was completely broken. I doubt this is the correct fix, but it does get it running again.

    opened by johnthuss 4
  • cdbimport: Add advanced filtering into cdbimport

    Solution description:

    1. The package "/cayenne-tools/src/main/java/org/apache/cayenne/tools/dbimport/config" is responsible for loading configuration from different sources (Ant, Maven, an external reverse-engineering file) and transforming it (see FiltersConfigBuilder) into a format convenient for DbLoader (see FiltersConfig). This means that FiltersConfig is the key interface in the interaction between the tools and server modules.

    2. FiltersConfig is a map (DbPath -> EntityFilter), where DbPath = Catalog/Schema/Table and EntityFilter = Table/Column/Procedure filters.

    3. DbLoader builds a DbPath for each object, takes the filter for this path (a filter can aggregate a number of filters defined in the config, see FiltersConfig.filter(DbPath)), and calls isInclude to figure out whether the object should be included. DbLoader should interact only with the DbPath, FiltersConfig, EntityFilter and Filter interfaces.

    4. The Filter interface was designed to encapsulate all filtering logic for each specific entity and to provide the flexibility to extend/override this mechanism. For each DB object, "isInclude" is called once and only in the one place where filtering is applied. Here I have two key classes: the Filter interface and FiltersFactory (a simplified sketch follows below).
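
    A much simplified sketch of that idea, with plain strings standing in for DbPath and EntityFilter (the real classes are richer than this):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.regex.Pattern;

    // Sketch only: a single isInclude() decision point, looked up per DB path.
    interface Filter<T> {
        boolean isInclude(T candidate);
    }

    class FiltersConfigSketch {

        // "catalog/schema/table" path -> filter for table names under that path
        private final Map<String, Filter<String>> filtersByPath = new HashMap<>();

        void addFilter(String dbPath, String includeRegex) {
            Pattern pattern = Pattern.compile(includeRegex);
            filtersByPath.put(dbPath, name -> pattern.matcher(name).matches());
        }

        Filter<String> filter(String dbPath) {
            // default: include everything when no filter is configured for the path
            return filtersByPath.getOrDefault(dbPath, name -> true);
        }
    }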

    opened by AlexKolonitsky 4
  • Add test for db merge

    1. Make the token name a constructor parameter and remove all other implementations of this method.
    2. I've created builders for domain objects (i.e. DataMap, Db/Obj Entity, Db/Obj Attribute, ...) and a factory for all builders. The main goal is to provide an easy way to construct objects and fill them only with the data a test case needs; everything else required to make an object valid is generated randomly (a hypothetical usage sketch follows below).
    3. DbMergerTest - tests correct identification of DbToModel changes.
    4. TokensReversTest - tests compliance with the invariant token.reverse.reverse == token.
    5. TokensToModelExecution - tests the correctness of applying DbToModel tokens.
    6. New dependency datafactory - to use random data in tests where the exact value is not really important.
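
    For illustration, the kind of usage such builders enable (the builder names here are hypothetical, not the actual API from this pull request):

    // Hypothetical sketch: only the attributes the test cares about are set explicitly;
    // the builder fills in everything else with random but valid values.
    DbEntity table = dbEntityBuilder()
            .name("ARTIST")
            .attributes(dbAttributeBuilder().name("ARTIST_ID").typeInt().primaryKey())
            .build();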
    opened by AlexKolonitsky 4
  • Utils for the JSON values comparison

    This PR contains a simple JSON parser that could be used to normalize and compare JSON values, to avoid time-consuming SQL updates caused by different formatting of equivalent JSON strings.

    The JSON parser is not intended for any use case other than fast value comparison.

    Here are simple benchmark results comparing Cayenne's specialized implementation with some widely used libraries. The compare call in all cases is basically parsing two values plus an equals() call.
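
    For illustration only, this is what "parse two values + equals()" means, shown here with Jackson's tree model rather than Cayenne's internal JsonTokenizer:

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class JsonCompareDemo {

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            // structurally equal JSON that differs only in whitespace and formatting
            boolean same = mapper.readTree("{\"a\":1,\"b\":[1, 2]}")
                    .equals(mapper.readTree("{ \"a\": 1, \"b\": [1,2] }"));

            System.out.println(same); // true -> a formatting-only change needs no UPDATE
        }
    }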

    Benchmark                         Mode  Cnt   Score   Error  Units
    GsonBenchmark.compare             avgt   12  89.533 ± 1.483  us/op
    GsonBenchmark.read                avgt   12  39.398 ± 0.271  us/op
    JacksonBenchmark.compare          avgt   12  80.887 ± 0.358  us/op
    JacksonBenchmark.read             avgt   12  38.201 ± 0.347  us/op
    JsonTokenizerBenchmark.compare    avgt   12  79.381 ± 0.722  us/op
    JsonTokenizerBenchmark.read       avgt   12  20.126 ± 0.114  us/op
    JsonTokenizerBenchmark.normalize  avgt   12  38.148 ± 0.554  us/op
    
    opened by stariy95 3
  • Fixed CAY-2379 by focusing the ObjEntity after the undo RemoveAttributeUndoableEdit Action

    • Split the viewCounterpartEntity method in DbEntityCounterpartAction and made it public, so that the ObjEntity can be switched to from the RemoveAttributeUndoableEdit class.
    • Added a focusObjEntity method to the RemoveAttributeUndoableEdit class.
    • Invoked the focusObjEntity method inside the undo method, in the section that restores the attribute for the ObjEntity type.
    opened by emecas 3
  • CAY-2639 DBImport and DB name case sensitivity. Feature for choosing case-sensitive naming.

    The feature was developed to support case-sensitive naming (e.g. "name", "Name", "NAME"). A button for enabling case-sensitive naming was added to the Modeler. The useCaseSensitive option allows case-sensitive naming (for tables, attributes, etc.) in the database, if the DB supports it, and passes on its initial state. Quoted identifiers were added to the loaders. Asciidoc documentation and a test for case-sensitive naming were added. Token creation and the tests for SQLite were also fixed, because that database does not support column modification: https://www.sqlite.org/omitted.html

    opened by OlegKhodokevich 0
  • Fix get declared fields dependent test

    Description

    The test org.apache.cayenne.reflect.PojoMapperTest.testObjectCreation will fail under NonDex, which detects flakiness in non-deterministic environments.

    To reproduce:

    mvn edu.illinois:nondex-maven-plugin:1.1.2:nondex \
        -pl cayenne-server \
        -Dtest=org.apache.cayenne.reflect.PojoMapperTest#testObjectCreation
    

    Issue

    In testObjectCreation, the object creation depends on the following logic:

    Field[] declaredFields = type.getDeclaredFields();
    this.setters = new MethodHandle[declaredFields.length];
    int i = 0;
    for(Field field : declaredFields) {
            ...
            setters[i++] = lookup.unreflectSetter(field);
    }
    

    However, according to the documentation of getDeclaredFields, the method does not guarantee the order of the returned fields, which may differ between JVMs.

    Simply sorting the fields guarantees a deterministic order, and thus correctness, in the testing environment.
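
    A minimal sketch of that fix, assuming sorting by field name is the chosen deterministic order (continuing the snippet above):

    Field[] declaredFields = type.getDeclaredFields();
    // impose a stable order before the setters array is built
    Arrays.sort(declaredFields, Comparator.comparing(Field::getName));
    this.setters = new MethodHandle[declaredFields.length];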

    opened by shunfan-shao 0
  • Fix concurrent map flakiness

    Description

    The test org.apache.cayenne.util.WeakValueMapTest.testConcurrentModification will fail under NonDex, which detects flakiness in non-deterministic environments.

    To reproduce:

    mvn edu.illinois:nondex-maven-plugin:1.1.2:nondex \
        -pl cayenne-server \
        -Dtest=org.apache.cayenne.util.WeakValueMapTest#testConcurrentModification
    

    Issue

    The test code follows the logic below:

    Map<String, Integer> map = new WeakValueMap<>(3);
    for(Map.Entry<String, Integer> entry : map.entrySet()) {
        if("key_2".equals(entry.getKey())) 
            map.remove("key_2");
    }
    

    While WeakValueMap uses a HashMap internally, map.entrySet() is not guaranteed to iterate over the entries in any particular order.

    The code is expected to throw a ConcurrentModificationException during execution. In some edge cases, e.g. when key_2 is traversed and removed last, the exception is not thrown and the test fails.

    opened by shunfan-shao 0
  • fix: replace non deterministic order data structure

    The test org.apache.cayenne.log.CompactSlf4jJdbcEventLoggerTest.compactBindings will fail under the NonDex tool, which detects flakiness caused by non-deterministic ordering.

    The function CompactSlf4jJdbcEventLogger.collectBindings uses a HashMap.

    protected void appendParameters(StringBuilder buffer, String label, ParameterBinding[] bindings) {
        ...
        buildBinding(buffer, label, collectBindings(bindings));
    }
    
    private Map<String, List<String>> collectBindings(ParameterBinding[] bindings) {
        Map<String, List<String>> bindingsMap = new HashMap<>();
        ...
        bindingsMap.computeIfAbsent(key, k -> new ArrayList<>());
        ...
        return bindingsMap;
    }
    

    The test relies on the contents of that HashMap to determine the output, but toString() cannot guarantee insertion order.

    logger.appendParameters(buffer, "bind", bindings);
    assertEquals(buffer.toString(), ...);
    

    I propose using a LinkedHashMap to keep the hash-based lookup while maintaining insertion order.
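
    In terms of the snippet above, the proposed change amounts to:

    // LinkedHashMap preserves insertion order, making the toString() output deterministic
    Map<String, List<String>> bindingsMap = new LinkedHashMap<>();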

    opened by shunfan-shao 0
  • Refactor MockChannelListener with mocking object and improve testing logic

    Fix CAY-2717

    Description

    Refactor test class DataContextDataChannelEventsIT.java by using Mockito.


    Motivation
    • Decoupling test class MockChannelListener from production interface DataChannelListener
    • Making the test conditions clearer by removing redundant overridden methods and extra fields.
    • Using Mockito.verify() to directly verify the behavior of the mock object and make the test conditions more explicit.

    Key changed/added classes in this PR
    • Created a mock object to replace the test subclass MockChannelListener, decoupling the test from production code.
    • Replaced assertion statements with Mockito.verify() calls that verify the invocation of graphChanged(GraphEvent), graphFlushed(GraphEvent) and graphRolledback(GraphEvent), as shown in the sketch below.
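
    A rough sketch of that pattern (assuming static imports of Mockito's mock/verify/never/any; the listener registration and the commit that fires the events are elided):

    // Sketch only: verify listener callbacks on a Mockito mock instead of a hand-written subclass.
    DataChannelListener listener = mock(DataChannelListener.class);
    // ... register the listener with the DataChannel and commit changes that fire events ...
    verify(listener).graphFlushed(any(GraphEvent.class));
    verify(listener, never()).graphRolledback(any(GraphEvent.class));
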
    opened by wx930910 0