JDBC driver for ClickHouse

Overview

ClickHouse JDBC driver


This is a basic and restricted implementation of a JDBC driver for ClickHouse. It supports a minimal subset of features sufficient to be usable.

Usage

<dependency>
    <groupId>ru.yandex.clickhouse</groupId>
    <artifactId>clickhouse-jdbc</artifactId>
    <version>0.3.1</version>
</dependency>

URL syntax: jdbc:clickhouse://<host>:<port>[/<database>], e.g. jdbc:clickhouse://localhost:8123/test

JDBC Driver Class: ru.yandex.clickhouse.ClickHouseDriver

For example:

String url = "jdbc:clickhouse://localhost:8123/test";
ClickHouseProperties properties = new ClickHouseProperties();
// set connection options - see more defined in ClickHouseConnectionSettings
properties.setClientName("Agent #1");
...
// set default request options - more in ClickHouseQueryParam
properties.setSessionId("default-session-id");
...

ClickHouseDataSource dataSource = new ClickHouseDataSource(url, properties);
String sql = "select * from mytable";
Map<ClickHouseQueryParam, String> additionalDBParams = new HashMap<>();
// set request options, which will override the default ones in ClickHouseProperties
additionalDBParams.put(ClickHouseQueryParam.SESSION_ID, "new-session-id");
...
try (ClickHouseConnection conn = dataSource.getConnection();
    ClickHouseStatement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery(sql, additionalDBParams)) {
    ...
}
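
The driver can also be used through plain java.sql.DriverManager. A minimal sketch, assuming the driver jar is on the classpath:

import java.sql.*;

Class.forName("ru.yandex.clickhouse.ClickHouseDriver"); // optional with JDBC 4+ auto-discovery
try (Connection conn = DriverManager.getConnection("jdbc:clickhouse://localhost:8123/test");
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("select 1")) {
    while (rs.next()) {
        System.out.println(rs.getInt(1));
    }
}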

Additionally, if you have several ClickHouse instances, you can use BalancedClickhouseDataSource, as sketched below.
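
A minimal sketch, assuming two nodes and reusing the properties object from above; the comma-separated host list and the scheduleActualization() call are based on the ru.yandex.clickhouse API, so verify them against the driver version you use:

import java.util.concurrent.TimeUnit;

String url = "jdbc:clickhouse://host1:8123,host2:8123/test";
BalancedClickhouseDataSource balanced = new BalancedClickhouseDataSource(url, properties);
// ping every host periodically and keep only reachable ones in rotation
balanced.scheduleActualization(10, TimeUnit.SECONDS);
try (ClickHouseConnection conn = balanced.getConnection();
    ClickHouseStatement stmt = conn.createStatement()) {
    ...
}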

Extended API

To provide data manipulation functionality beyond what JDBC defines, a proprietary API exists. The entry point for this API is the ClickHouseStatement#write() method.

Importing file into table

import ru.yandex.clickhouse.ClickHouseStatement;
ClickHouseStatement sth = connection.createStatement();
sth
    .write() // Write API entrypoint
    .table("default.my_table") // where to write data
    .option("format_csv_delimiter", ";") // specific param
    .data(new File("/path/to/file.csv.gz"), ClickHouseFormat.CSV, ClickHouseCompression.gzip) // specify input     
    .send();

Configurable send

import ru.yandex.clickhouse.ClickHouseStatement;
ClickHouseStatement sth = connection.createStatement();
sth
    .write()
    .sql("INSERT INTO default.my_table (a,b,c)")
    .data(new MyCustomInputStream(), ClickHouseFormat.JSONEachRow)
    .dataCompression(ClickHouseCompression.brotli)    
    .addDbParam(ClickHouseQueryParam.MAX_PARALLEL_REPLICAS, "2")
    .send();

Send data in binary format with a custom user callback

import ru.yandex.clickhouse.ClickHouseStatement;
ClickHouseStatement sth = connection.createStatement();
sth.write().send("INSERT INTO test.writer", new ClickHouseStreamCallback() {
    @Override
    public void writeTo(ClickHouseRowBinaryStream stream) throws IOException {
        for (int i = 0; i < 10; i++) {
            stream.writeInt32(i);
            stream.writeString("Name " + i);
        }
    }
},
ClickHouseFormat.RowBinary); // RowBinary or Native are supported

Compiling with maven

The driver is built with Maven:

mvn package -DskipTests=true

To build a jar with dependencies, use:

mvn package assembly:single -DskipTests=true

Build requirements

To build the JDBC client, you need JDK 1.7 or higher.

Comments
  • Treating the Empty String as NULL since 0.3.2-patch6

    I can reproduce this misbehaviour since version 0.3.2-patch6.

    Up to version 0.3.2-patch5 everything worked fine.

    I poked around a bit and found that the jdbc-driver has problems with empty strings. It seems to handle empty strings like null values.

    Before patch6 everything worked fine. This behavior still exists in patch9

    This relates to #896 Unexpected error inserting/updating row in part [insertRow add batch] ... com.clickhouse.client.data.BinaryStreamUtils.writeString (BinaryStreamUtils.java:1667)

    If I replace the empty string value with any character, the error is gone.
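
    A minimal repro sketch of what the report describes; the table and column names here are hypothetical:

    // hypothetical table: CREATE TABLE t (s String) ENGINE = Memory
    try (PreparedStatement ps = conn.prepareStatement("INSERT INTO t (s) VALUES (?)")) {
        ps.setString(1, ""); // an empty, non-null string
        ps.addBatch();
        ps.executeBatch();   // reported to fail in BinaryStreamUtils.writeString on patch6..patch9
    }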

    bug 
    opened by mafiore 28
  • Is there a rich sql management studio that works with clickhouse-jdbc?


    Hi all,

    Is there a rich SQL management studio that works with ClickHouse via clickhouse-jdbc correctly? I've tried DBVisualizer and SQLWorkbench; they cannot create or alter tables, and cannot even show table data (wrong syntax errors).

    opened by umaxfun 25
  • Problems with huge Int data types (Tableau)


    Hi! Our team is developing a Tableau connector to ClickHouse based on this JDBC driver. During testing, we encountered problems when working with data types UInt64, UInt256, Int128, Int256:

    • In the case of UInt64, we get an error parsing the number as Long. It looks like the problem is in lines 171-173 of ClickHouseRowBinaryStream.java

    • In the case of the other listed data types, the number is truncated. It looks like a problem with the declared precision in ClickHouseDataType.java

    Is this a bug or a feature?
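
    For reference, a hedged workaround sketch: UInt64 values above Long.MAX_VALUE cannot be read with getLong(), but the standard getBigDecimal() accessor avoids the overflow, assuming the driver can parse the value into a BigDecimal (the column index is illustrative):

    while (resultSet.next()) {
        // read UInt64 through BigDecimal instead of getLong() to avoid overflow
        java.math.BigInteger v = resultSet.getBigDecimal(1).toBigInteger();
        System.out.println(v);
    }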

    opened by artshevchenko 20
  • Pentaho Spoon Problem when I try to connect clickhouse table.


    I copied the JDBC folder into /data-integration/lib and set up the clickhouse database as a generic database connection. Still, I am facing a problem: Spoon freezes when I try to load the clickhouse table.

    opened by deniztek 19
  • [HELP] 0.3.2 pre-release for public testing


    Background

    v0.3.2 was a minor release scheduled to ship months ago, but it has turned into a complete rewrite, mainly for two reasons:

    1. decoupling (see #570 for details)

    2. switching the data format to RowBinary to fix issues and improve performance

    Benchmark results...

      0.3.2-test1...
      • clickhouse-grpc-jdbc and clickhouse-http-jdbc are the new JDBC driver (0.3.2) using the RowBinary data format
      • clickhouse-jdbc is the old JDBC driver (0.3.1-patch) based on TabSeparated
      • clickhouse-native-jdbc is ClickHouse-Native-JDBC 2.6.0

      Benchmark settings: thread=1, sampleSize=100000, fetchSize=10000, mode=throughput(ops/s). [benchmark chart]

      0.3.2-test3...

      Unlike the previous round of testing, the ClickHouse container is re-created a few minutes before benchmarking each driver.

      • Single thread
        • Comparison [chart] Note: HttpClient is async (it uses more than one thread at runtime); gRPC uses gzip (why?), which is slower than lz4.
        • VM utilization [chart] Note: on the client side, the new driver consumes less memory and CPU than the others, BUT higher CPU on the server side (due to the overhead of the http protocol?).
      • 4 threads
        • Comparison [chart]

        • VM utilization [chart]

      0.3.2...

      [benchmark chart]

      Query performance is similar to 0.3.2-test3, so this time we only focus on insertion. [chart] Note: gRPC does not support LZ4 compression, so we use GZIP in the test.

      • Single thread [chart]
      • 4 threads [chart]

    0.3.2-test1, 0.3.2-test2, and 0.3.2-test3 are pre-releases for public testing.

    Downloads

    Maven dependency:

    <dependency>
        <!-- will stop using group id "ru.yandex.clickhouse" starting from 0.4.0  -->
        <groupId>com.clickhouse</groupId>
        <!-- or clickhouse-grpc-client to use gRPC client  -->
        <artifactId>clickhouse-http-client</artifactId>
        <version>0.3.2-test3</version>
    </dependency>
    

    To download JDBC drivers:

    | Package | Size | Legacy | New | HTTP | gRPC | Remark |
    | ------- | ---- | ------ | --- | ---- | ---- | ------ |
    | clickhouse-jdbc-0.3.2-all.jar | 18.6MB | Y | Y | Y | Y | Both old and new JDBC drivers (besides netty, okhttp is included as well) |
    | clickhouse-jdbc-0.3.2-http.jar | 756KB | N | Y | Y | N | New JDBC driver with only http support |
    | clickhouse-jdbc-0.3.2-grpc.jar | 17.3MB | N | Y | N | Y | New JDBC driver with only grpc support (only netty, okhttp is excluded) |
    | clickhouse-jdbc-0.3.2-shaded.jar | 2.8MB | Y | Y | Y | N | Both old and new JDBC drivers |

    Note: the first two are recommended. gRPC support is experimental, so you'd better use http.

    Known Issues

    • new driver (com.clickhouse.jdbc.ClickHouseDriver) does not work with ClickHouse server versions before 21.3
    • java.io.IOException: HTTP/1.1 header parser received no bytes when using JDK 11+ and http_connection_provider is set to HTTP_CLIENT
    • RESOURCE_EXHAUSTED: Compressed gRPC message exceeds maximum size - increase max_inbound_message_size to resolve
    • select 1 format JSON works in http but not grpc, because grpc client is not aware of response format
    • insert into table values(?, ?) is slow in batch mode - try insert into table select c2,c3 from input('c1 String, c2 UInt8, c3 Nullable(UInt32)') instead (see the sketch after this list)
    • use_time_zone and use_server_time_zone_for_dates properties do not work
    • no table/index show up under jdbc(*) database
    • roaringbitmap is not included in the shaded jar
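
    A hedged sketch of the input()-based insert mentioned in the list above; the table name is taken from the example and otherwise hypothetical, and whether parameters map one-per-input-column should be verified against your driver version:

    String sql = "insert into table select c2, c3 from input('c1 String, c2 UInt8, c3 Nullable(UInt32)')";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, "id-1"); // c1
        ps.setInt(2, 8);         // c2
        ps.setObject(3, null);   // c3
        ps.addBatch();
        ps.executeBatch();
    }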

    Key Changes

    • Java client and JDBC driver are now in different modules, along with JPMS support
    • Replaced data format from TabSeparated to RowBinary
    • Support more data types including Date32, Geo types, and mixed use of nested types
    • JDBC connection URL now supports abbreviation, protocol and optional port
      • jdbc:ch://localhost is same as jdbc:clickhouse:http://localhost:8123
      • jdbc:ch:grpc://localhost/db is same as jdbc:clickhouse:grpc://localhost:9100/db
    • New JDBC driver class is com.clickhouse.jdbc.ClickHouseDriver (ru.yandex.clickhouse.ClickHouseDriver will be removed starting from 0.4.0)
    • JDBC connection properties are simplified
      • use custom_http_headers and custom_http_params for customization - won't work for the grpc client (see the sketch after this list)
      • jdbcCompliant (defaults to true) to support fake transactions and standard synchronous UPDATE and DELETE statements
      • typeMappings to customize type mapping (e.g. DateTime=java.lang.String,DateTime32=java.lang.String)
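
    A hedged sketch of the customization properties above; the header name is illustrative, and the exact comma-separated key=value syntax should be checked against the driver docs:

    Properties props = new Properties();
    props.setProperty("custom_http_headers", "X-App-Name=my-app"); // illustrative header
    props.setProperty("custom_http_params", "max_execution_time=30");
    Connection conn = DriverManager.getConnection("jdbc:ch://localhost/test", props);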

    Some more details can be found at #736, #747, #769, and #777.

    help wanted 
    opened by zhicwu 17
  • column type is Array(Nullable(Int64)) but NULL becomes 0 in the result returned


    resultSet: [3553669559,3571293165,3851086538,3090702645,3407263362] [3874870412,3451171182,3922428704,3228944982,3590715072] [NULL,3989698365,3521029982,NULL,NULL] [NULL] [3411652369,3144762202,3230150923,3581921969,3306736155]

    getArray: {0,3989698365,3521029982,0,0}

    my code:

    while (resultSet.next()) {
        Array array = resultSet.getArray(1);
        Object array1 = array.getArray();
    }

    version 0.2.3
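
    A hedged diagnostic sketch: a primitive long[] cannot represent NULL, so if the driver materializes Array(Nullable(Int64)) as long[] rather than boxed Long[], nulls inevitably collapse to 0; printing the runtime type makes this visible:

    while (resultSet.next()) {
        Object raw = resultSet.getArray(1).getArray();
        // "long[]" cannot hold null elements; "Long[]" (boxed) can
        System.out.println(raw.getClass().getSimpleName());
    }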

    bug 
    opened by liumaojing 16
  • Prepare 0.3.2


    Finish open items in #777 and release 0.3.2.

    • [x] add warnings
      • legacy driver will be removed in 0.4.0
      • new driver does not support version older than 21.3
    • [x] update docs (supported data types, compatibility matrix, and examples)
    • [x] fix timezone issue (use_time_zone and use_server_time_zone_for_date options work now)
    • [x] fix most parser issues (except case when and performance) - allow disabling parsing in the JDBC driver in 0.3.3
    • [x] enhance prepared statement for the two use cases below
      • insert into ... values
      • insert into ... format ...
    • [x] throw exception for AggregateFunction types - a type system to deserialize/serialize different kinds of states needs to be implemented in 0.3.3
    • [x] update clickhouse-benchmark to test insertion
    opened by zhicwu 16
  • Format Timestamp for DateTime64 Parameter with millisecond precision not working


    When setting a Timestamp for a DateTime64 parameter with millisecond precision on a prepared statement, the milliseconds are lost when inserting.

    This issue occurs for clickhouse-jdbc version 0.3.0

    Example:

    Create the following table

    CREATE TABLE IF NOT EXISTS test (
        date DateTime64(3),
        value Decimal64(8)
    )

    And then insert a record with a Timestamp with millisecond precision:

      ClickHouseConnection conn = getConnection();
      PreparedStatement stmt = conn.prepareStatement("INSERT INTO test (date, value) VALUES (?, ?);");
      long date = 1617028535604L; // Mon Mar 29 2021 14:35:35.604
      stmt.setTimestamp(1, new Timestamp(date));
      stmt.setBigDecimal(2, java.math.BigDecimal.ONE); // bind the value column too so the insert runs
      stmt.executeUpdate();
      stmt.close();
      conn.close();
    

    When selecting the inserted record you will see that the milliseconds are lost: the DateTime64 returned by ClickHouse is 2021-03-29 16:35:35.000, but it should be 2021-03-29 16:35:35.604.

    I have pinpointed the problem to the formatTimestamp function

    You can see with the following simple example:

        public static void main(String[] args) {
            long date = 1617028535604L; // Mon Mar 29 2021 14:35:35.604
            Timestamp ts = new Timestamp(date);
            String formatted = ClickHouseValueFormatter.formatTimestamp(ts, TimeZone.getTimeZone("UTC"));
            System.out.println("SQL Timestamp: " + ts);
            System.out.println("ClickHouse JDBC Formatted Timestamp: " + formatted);
        }
    

    This will output:

    SQL Timestamp: 2021-03-29 16:35:35.604
    ClickHouse JDBC Formatted Timestamp: 2021-03-29 14:35:35
    

    But the ClickHouse JDBC Formatted Timestamp should be 2021-03-29 14:35:35.604
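
    A hedged workaround sketch until the formatter is fixed: bind a pre-formatted string with millisecond precision instead of a Timestamp (the UTC zone below is an assumption about the server configuration):

    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
    fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
    stmt.setString(1, fmt.format(new Timestamp(1617028535604L))); // keeps the .604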

    bug module-jdbc 
    opened by PHameete 16
  • What are the plans to support Tableau?


    Is there any plan to support Tableau in our JDBC driver? Yandex seems to be focusing on Tableau support, but I couldn't find any comments on JDBC, so I'm writing here.

    enhancement 
    opened by Keiichi-Hirano 16
  • Pentaho PDI use JDBC to ClickHouse,Could not initialize class ru.yandex.clickhouse.response.ClickHouseLZ4Stream


    Error connecting to database [ClickHouse] :org.pentaho.di.core.exception.KettleDatabaseException: Error occurred while trying to connect to the database

    Error connecting to database: (using class ru.yandex.clickhouse.ClickHouseDriver) Could not initialize class ru.yandex.clickhouse.response.ClickHouseLZ4Stream

    org.pentaho.di.core.exception.KettleDatabaseException: Error occurred while trying to connect to the database

    Error connecting to database: (using class ru.yandex.clickhouse.ClickHouseDriver) Could not initialize class ru.yandex.clickhouse.response.ClickHouseLZ4Stream

    at org.pentaho.di.core.database.Database.normalConnect(Database.java:472)
    at org.pentaho.di.core.database.Database.connect(Database.java:370)
    at org.pentaho.di.core.database.Database.connect(Database.java:341)
    at org.pentaho.di.core.database.Database.connect(Database.java:331)
    at org.pentaho.di.core.database.DatabaseFactory.getConnectionTestReport(DatabaseFactory.java:80)
    at org.pentaho.di.core.database.DatabaseMeta.testConnection(DatabaseMeta.java:2786)
    at org.pentaho.ui.database.event.DataHandler.testDatabaseConnection(DataHandler.java:619)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.pentaho.ui.xul.impl.AbstractXulDomContainer.invoke(AbstractXulDomContainer.java:313)
    at org.pentaho.ui.xul.impl.AbstractXulComponent.invoke(AbstractXulComponent.java:157)
    at org.pentaho.ui.xul.impl.AbstractXulComponent.invoke(AbstractXulComponent.java:141)
    at org.pentaho.ui.xul.swt.tags.SwtButton.access$500(SwtButton.java:43)
    at org.pentaho.ui.xul.swt.tags.SwtButton$4.widgetSelected(SwtButton.java:137)
    at org.eclipse.swt.widgets.TypedListener.handleEvent(Unknown Source)
    at org.eclipse.swt.widgets.EventTable.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Display.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Widget.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Display.runDeferredEvents(Unknown Source)
    at org.eclipse.swt.widgets.Display.readAndDispatch(Unknown Source)
    at org.eclipse.jface.window.Window.runEventLoop(Window.java:820)
    at org.eclipse.jface.window.Window.open(Window.java:796)
    at org.pentaho.di.ui.xul.KettleDialog.show(KettleDialog.java:80)
    at org.pentaho.di.ui.xul.KettleDialog.show(KettleDialog.java:47)
    at org.pentaho.di.ui.core.database.dialog.XulDatabaseDialog.open(XulDatabaseDialog.java:118)
    at org.pentaho.di.ui.core.database.dialog.DatabaseDialog.open(DatabaseDialog.java:60)
    at org.pentaho.di.ui.spoon.delegates.SpoonDBDelegate.editConnection(SpoonDBDelegate.java:95)
    at org.pentaho.di.ui.spoon.Spoon.editConnection(Spoon.java:2787)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.pentaho.ui.xul.impl.AbstractXulDomContainer.invoke(AbstractXulDomContainer.java:313)
    at org.pentaho.ui.xul.impl.AbstractXulComponent.invoke(AbstractXulComponent.java:157)
    at org.pentaho.ui.xul.impl.AbstractXulComponent.invoke(AbstractXulComponent.java:141)
    at org.pentaho.ui.xul.jface.tags.JfaceMenuitem.access$100(JfaceMenuitem.java:43)
    at org.pentaho.ui.xul.jface.tags.JfaceMenuitem$1.run(JfaceMenuitem.java:106)
    at org.eclipse.jface.action.Action.runWithEvent(Action.java:498)
    at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:545)
    at org.eclipse.jface.action.ActionContributionItem.access$2(ActionContributionItem.java:490)
    at org.eclipse.jface.action.ActionContributionItem$5.handleEvent(ActionContributionItem.java:402)
    at org.eclipse.swt.widgets.EventTable.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Display.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Widget.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Display.runDeferredEvents(Unknown Source)
    at org.eclipse.swt.widgets.Display.readAndDispatch(Unknown Source)
    at org.pentaho.di.ui.spoon.Spoon.readAndDispatch(Spoon.java:1376)
    at org.pentaho.di.ui.spoon.Spoon.waitForDispose(Spoon.java:8161)
    at org.pentaho.di.ui.spoon.Spoon.start(Spoon.java:9523)
    at org.pentaho.di.ui.spoon.Spoon.main(Spoon.java:702)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.pentaho.commons.launcher.Launcher.main(Launcher.java:92)
    

    Caused by: org.pentaho.di.core.exception.KettleDatabaseException: Error connecting to database: (using class ru.yandex.clickhouse.ClickHouseDriver) Could not initialize class ru.yandex.clickhouse.response.ClickHouseLZ4Stream

    at org.pentaho.di.core.database.Database.connectUsingClass(Database.java:585)
    at org.pentaho.di.core.database.Database.normalConnect(Database.java:456)
    ... 56 more
    

    Caused by: java.lang.NoClassDefFoundError: Could not initialize class ru.yandex.clickhouse.response.ClickHouseLZ4Stream
        at ru.yandex.clickhouse.ClickHouseStatementImpl.checkForErrorAndThrow(ClickHouseStatementImpl.java:728)
        at ru.yandex.clickhouse.ClickHouseStatementImpl.getInputStream(ClickHouseStatementImpl.java:551)
        at ru.yandex.clickhouse.ClickHouseStatementImpl.executeQuery(ClickHouseStatementImpl.java:114)
        at ru.yandex.clickhouse.ClickHouseStatementImpl.executeQuery(ClickHouseStatementImpl.java:97)
        at ru.yandex.clickhouse.ClickHouseStatementImpl.executeQuery(ClickHouseStatementImpl.java:92)
        at ru.yandex.clickhouse.ClickHouseStatementImpl.executeQuery(ClickHouseStatementImpl.java:88)
        at ru.yandex.clickhouse.ClickHouseConnectionImpl.initTimeZone(ClickHouseConnectionImpl.java:86)
        at ru.yandex.clickhouse.ClickHouseConnectionImpl.<init>(ClickHouseConnectionImpl.java:75)
        at ru.yandex.clickhouse.ClickHouseDriver.connect(ClickHouseDriver.java:58)
        at ru.yandex.clickhouse.ClickHouseDriver.connect(ClickHouseDriver.java:50)
        at ru.yandex.clickhouse.ClickHouseDriver.connect(ClickHouseDriver.java:32)
        at java.sql.DriverManager.getConnection(DriverManager.java:664)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at org.pentaho.di.core.database.Database.connectUsingClass(Database.java:567)
        ... 57 more

    Custom URL: jdbc:clickhouse://IP:9000/default
    Custom Driver Class: ru.yandex.clickhouse.ClickHouseDriver

    bug 
    opened by irislips 15
  • Roadmap 2021


    ~~0.2.x~~

    Focus on bug fixes, small enhancements, and backward compatibility...
    • ~~0.2.5~~
      • [x] ~~switch to github actions for consistency~~
      • [x] ~~use testcontainer for integration test~~
    • ~~0.2.6~~
      • [x] ~~enable retry for idempotent queries (as a workaround for hosts failing to respond)~~
      • [x] ~~new sql parser~~
      • [x] ~~use basic auth instead of query parameters for authentication~~
    • 0.2.7 - TBD (in case of any critical issue)

    0.3.x

    Focus on new features, code clean up, and abstraction which may break existing interfaces/APIs...

    Previous releases...
    • ~~0.3.0~~
      • [x] ~~BREAKING CHANGE: drop JDK7 support~~
      • [x] ~~BREAKING CHANGE: remove Guava dependency (UnsignedLong is removed, please use long(faster) or BigInteger(slower) instead for UInt64)~~ ~~Note: shaded jar is now ~3.65MB(was 7.19MB in 0.2.6, and 5.68MB in 0.2.4).~~
      • [x] ~~JDBC 4.2 support~~
      • [x] ~~more data types (including aliases) like IPv4, IPv6, DateTime64, *Int128, *Int256, Decimal256 and Map~~ ~~Note: UInt128 will be supported soon on server side.~~
      • [x] ~~RoaringBitmap support - please use latest RoaringBitmap~~
      • [x] ~~restructure code (clickhouse-jdbc for JDBC compliance, and clickhouse-*client for efficiency and consistent behaviors like any other clickhouse client, see #570)~~
      • [x] ~~performance test (clickhouse-jdbc vs. clickhouse4j vs. clickhouse-native-jdbc vs. mariadb-java-client)~~
      • [x] ~~CI enhancement: checkstyle, spellcheck & SonarCloud~~
    • ~~0.3.1~~
      • [x] ~~BREAKING CHANGE: remove deprecated stuff Note: will also drop fallback of SQL parsing~~
      • [x] ~~BREAKING CHANGE: exclude roaringbitmap in uber jar and remove jitpack.io maven repository - see #603~~
      • [x] ~~multi-statement support - only return the last result~~
    Ongoing releases...
    • 0.3.2
      • [x] JPMS support along with multi-release jars
         19M	target/clickhouse-jdbc-0.3.2-SNAPSHOT-all.jar
         18M	target/clickhouse-jdbc-0.3.2-SNAPSHOT-grpc.jar
        664K	target/clickhouse-jdbc-0.3.2-SNAPSHOT-http.jar
        960K	target/clickhouse-jdbc-0.3.2-SNAPSHOT-javadoc.jar
        2.7M	target/clickhouse-jdbc-0.3.2-SNAPSHOT-shaded.jar
        428K	target/clickhouse-jdbc-0.3.2-SNAPSHOT.jar
        
      • [x] introduce abstract module clickhouse-client, experimental clickhouse-grpc-client, and HttpURLConnection-based clickhouse-http-client
      • [x] named parameter support(only available in clickhouse-client)
      • [x] support RowBinary* format and more data types(Geo types, Date32, Tuple, Nested, mixed use of Array/Tuple/Map etc.)
      • [x] new JDBC driver(com.clickhouse.jdbc.ClickHouseDriver) built on top of clickhouse-client Note: both old and new drivers will co-exist in 0.3.x series and the old one will be removed starting from 0.4.
      • [x] show schema of remote datasources(when JDBC bridge is available)
      • [x] fix timezone and DateTime64 related issues
      • [x] adaptive integration test against local testcontainer or a remote server, and categorize cases under different groups
      • [x] replace jackson-databind and jackson-core by gson
      • [x] enhance benchmarks to cover most JDBC drivers and data types
      • [x] alternative implementation for http(s) protocol(JDK 11 HttpClient)
    opened by zhicwu 14
  • Add version info to `client_name`


    Usage

    As CH platform maintainers, we want to check whether users are using a new version of the JDBC driver.

    How to achieve

    Add version info to client_name so that it appears in system.query_log.

    @zhicwu what is your opinion?
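
    Until the driver appends its version automatically, a sketch of embedding it yourself through the existing client_name property (the application and version strings are hypothetical):

    ClickHouseProperties props = new ClickHouseProperties();
    // surfaces as client_name in system.query_log on the server
    props.setClientName("my-app (clickhouse-jdbc 0.3.2)");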

    enhancement 
    opened by JackyWoo 1
  • A bug occurs when jdbc writes to array (byte)


    I can successfully write binary data to clickhouse using statement.setBytes through JDBC when the clickhouse field type is String. But now I have a field of array type that fails to be written when I use statement.setArray(1, conn.createArrayOf("Array(Byte)", photosF)). The value written to the clickhouse array is the Java array address, such as ['[B@4044e2d6']. I get the same result using statement.setArray(1, conn.createArrayOf("String", photosF)). photosF is a two-dimensional array, byte[][].

    opened by Sivannnnnn 5
  • When using JSON, the "slash" character is read back as garbled text (the saved "/" becomes garbled after retrieval)

    I save a JSON object into clickhouse. It contains some "slash" characters. When I query the content, the "slash" comes back garbled. It seems to be a problem in the com.clickhouse.client.data.tsv.ByteFragment class:

    private static final byte[] convert = {
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, //   0..9
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, //  10..19
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, //  20..29
        -1, -1, -1, -1, -1, -1, -1, -1, -1, 39, //  30..39
        -1, -1, -1, -1, -1, -1, -1, -1,  0, -1, //  40..49
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, //  50..59
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, //  60..69
        -1, -1, -1, -1, -1, -1, -1, -1,  0, -1, //  70..79
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, //  80..89
        -1, -1, 92, -1, -1, -1, -1, -1,  8, -1, //  90..99
        -1, -1, 12, -1, -1, -1, -1, -1, -1, -1, // 100..109
        10, -1, -1, -1, 13, -1,  9, -1, -1, -1, // 110..119
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, // 120..129
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
    };

    The array does not contain a conversion entry for the 'slash' character.

    (No escape handling is done for '/', which causes the garbled text.)

    opened by lorkingsky 1
  • Illegal Parquet type: INT64 (TIMESTAMP(NANOS,true))


    Getting the error below when reading a parquet file.

    Caused by: org.apache.spark.sql.AnalysisException: Illegal Parquet type: INT64 (TIMESTAMP(NANOS,true))
        at org.apache.spark.sql.errors.QueryCompilationErrors$.illegalParquetTypeError(QueryCompilationErrors.scala:1328)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.illegalType$1(ParquetSchemaConverter.scala:178)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.$anonfun$convertPrimitiveField$2(ParquetSchemaConverter.scala:247)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convertPrimitiveField(ParquetSchemaConverter.scala:196)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convertField(ParquetSchemaConverter.scala:160)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.$anonfun$convertInternal$1(ParquetSchemaConverter.scala:124)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.$anonfun$convertInternal$1$adapted(ParquetSchemaConverter.scala:94)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
        at scala.collection.immutable.Range.foreach(Range.scala:158)
        at scala.collection.TraversableLike.map(TraversableLike.scala:286)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
        at scala.collection.AbstractTraversable.map(Traversable.scala:108)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convertInternal(ParquetSchemaConverter.scala:94)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.$anonfun$convertGroupField$1(ParquetSchemaConverter.scala:287)
        at scala.Option.fold(Option.scala:251)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convertGroupField(ParquetSchemaConverter.scala:287)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convertField(ParquetSchemaConverter.scala:161)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.$anonfun$convertGroupField$3(ParquetSchemaConverter.scala:359)
        at scala.Option.fold(Option.scala:251)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convertGroupField(ParquetSchemaConverter.scala:287)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convertField(ParquetSchemaConverter.scala:161)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.$anonfun$convertInternal$1(ParquetSchemaConverter.scala:124)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.$anonfun$convertInternal$1$adapted(ParquetSchemaConverter.scala:94)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
        at scala.collection.immutable.Range.foreach(Range.scala:158)
        at scala.collection.TraversableLike.map(TraversableLike.scala:286)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
        at scala.collection.AbstractTraversable.map(Traversable.scala:108)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convertInternal(ParquetSchemaConverter.scala:94)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter.convert(ParquetSchemaConverter.scala:70)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.$anonfun$readSchemaFromFooter$2(ParquetFileFormat.scala:780)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.readSchemaFromFooter(ParquetFileFormat.scala:780)
        ... 28 more

    question 
    opened by shishir-kr92 1
  • USE statement does nothing


    driver version: 0.3.2-patch11
    ClickHouse: 21.12.4.1
    

    The new implementation of the driver does not change the schema stored inside the connection, so statements after USE are executed in the schema that was originally configured in the settings.

    Example:

    use system;
    select currentDatabase();
    use default;
    select currentDatabase();
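
    A hedged workaround sketch: switch the default database on the connection object instead of issuing a bare USE statement; whether the driver honors the standard JDBC setSchema() call should be verified against your version:

    conn.setSchema("system"); // instead of: use system;
    try (Statement s = conn.createStatement();
        ResultSet rs = s.executeQuery("select currentDatabase()")) {
        ...
    }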
    
    bug 
    opened by kassak 1
Releases (v0.3.2-patch11)