Fast and reliable message broker built on top of Kafka.

Overview
Comments
  • Help required to pass external configuration of zookeeper / kafka zk to Hermes management

     I am trying to run Hermes Management standalone on an EC2 instance using the sh script. However, I am not able to set up the Management instance to talk to a ZooKeeper running on a different host. Configuration: export HERMES_MANAGEMENT_OPTS="-Dstorage.connectionString=zk-metadata-instance -Dkafka.clusters.connectionString=kafka-zk-instance"

     When I run Management with the above configuration, it first tries to connect to zk-metadata-instance and then tries to connect to localhost:2181. I think it is not able to pick up kafka-zk-instance. Please correct me if the above is not correct.

    opened by hikrrish 25
  • Fetch topic schema eagerly

     We should consider fetching topic schemas eagerly. Right now, the first request to a validated topic ends with a 408 Request Timeout response:

    {
      "message": "Async timeout, cause: unknown",
      "code": "TIMEOUT"
    }
    
    2015-08-31 09:30:26.260 ERROR p.a.t.h.f.publishing.HttpResponder - Async timeout, cause: unknown, publishing on topic g1.t, remote host 0:0:0:0:0:0:0:1, message state PARSED
    

     This applies to the ZooKeeper-based schema repository as well.
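
     For illustration, a minimal sketch of what eager fetching could look like (hypothetical names, not actual Hermes code): warm a schema cache for all known topics at startup, so the first publish does not block on a synchronous schema fetch and time out.

     import java.util.List;
     import java.util.Map;
     import java.util.concurrent.CompletableFuture;
     import java.util.concurrent.ConcurrentHashMap;

     public class EagerSchemaWarmup {
         // hypothetical source of schemas, e.g. a schema-registry client
         interface SchemaSource { String fetchSchema(String topic); }

         private final Map<String, String> cache = new ConcurrentHashMap<>();
         private final SchemaSource source;

         EagerSchemaWarmup(SchemaSource source) { this.source = source; }

         // fetch all schemas in parallel and wait before accepting publish traffic
         void warmUp(List<String> topics) {
             CompletableFuture.allOf(topics.stream()
                     .map(t -> CompletableFuture.runAsync(() -> cache.put(t, source.fetchSchema(t))))
                     .toArray(CompletableFuture[]::new)).join();
         }

         String schemaFor(String topic) { return cache.get(topic); }
     }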

    enhancement 
    opened by dankraw 18
  • 312 message filtering

    Initial version of message filtering for hermes-consumers.

    Scope:

    • registering subscription specific filters
    • registering global filters
    • registering custom filters
    • monitoring filtered events

    Notable features:

    • support for AVRO (via avpath)
    • support for JSON (via jsonpath)
    • filter chaining

     Choosing the appropriate filter: this mainly concerns the message content type. Filtering is done before any conversion takes place, so all messages have the same content type as the topic on which they were published. Not every filter has to dig into the message content, though.

     | topic content-type | filter type |
     | --- | --- |
     | avro | avropath |
     | json | jsonpath |

     Feature activation: the filtering component has to be enabled by setting: consumer.filtering.enabled = true

     Setting this to false will disable both global and subscription-specific filters.

     Order of filtering: filters run in the order in which they are declared. Global filters run before subscription-specific ones.

     Global filters: enabled on application bootstrap, they run for every message on every subscription. Global filters cannot be enabled/disabled or inspected by subscribers, nor viewed via the REST API.
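
     For illustration, a minimal sketch of the ordering described above (not Hermes internals): global filters are evaluated first, then subscription-specific filters, each in declaration order, and a message is delivered only if every filter accepts it.

     import java.util.ArrayList;
     import java.util.List;
     import java.util.function.Predicate;

     public class FilterChain {
         private final List<Predicate<String>> filters = new ArrayList<>();

         public FilterChain(List<Predicate<String>> globalFilters, List<Predicate<String>> subscriptionFilters) {
             filters.addAll(globalFilters);       // global filters always run first
             filters.addAll(subscriptionFilters); // then subscription-specific ones, in declaration order
         }

         // a message is delivered only if every filter in the chain accepts it
         public boolean accepts(String message) {
             return filters.stream().allMatch(filter -> filter.test(message));
         }
     }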

    Subscription configuration:

    {  "name": "mySubscription", 
       "endpoint": "http://my-service", 
       "supportTeam": "My Team",
       "filters": [
          {"type": "jsonpath", path: "$.user.name", matcher: "^abc.*"},
          {"type": "jsonpath", path: "$.offer.type", matcher: "new"}  
       ]
    }
    

     Subscription filter model: filters don't have to conform to any schema. We only require the type field to be declared. The rest of the properties are filter-specific (each filter specification can have a different set of properties).

    Docs: in progress

    CAUTION: this will be squashed soon

    opened by wojtkiewicz 16
  • Timeout while posting http message

     I am getting an async timeout while posting a message to a topic.

     Exact error message: { "message": "Async timeout, cause: unknown", "code": "TIMEOUT" }

     Headers:
     Content-Length → 60
     Content-Type → application/json;charset=ISO-8859-1
     Date → Thu, 17 Mar 2016 21:46:38 GMT
     Hermes-Message-Id → 9ca198b8-29e5-4123-8781-34cc787bf188

    Frontend server log

     2016-03-17 21:46:38.453 ERROR p.a.t.h.f.publishing.HttpResponder - Async timeout, cause: unknown, publishing on topic com.krishna.cash, remote host 10.81.90.115, message state SENDING_TO_KAFKA_PRODUCER_QUEUE
     2016-03-17 21:46:38.571 ERROR p.a.t.h.f.publishing.HttpResponder - Async timeout, cause: unknown, publishing on topic com.krishna.cash, remote host 10.81.90.115, message state SENDING_TO_KAFKA_PRODUCER_QUEUE

     I have tested my Kafka cluster manually and it is healthy.

     As soon as I create a topic, I can see directories being created in Kafka as shown below, and Hermes Management is working fine.

     drwx------ 2 kafka kafka 4096 Mar 17 21:39 com.krishna.cash-1
     drwx------ 2 kafka kafka 4096 Mar 17 21:39 com.krishna.cash-4
     drwx------ 2 kafka kafka 4096 Mar 17 21:39 com.krishna.cash-7
     drwx------ 2 kafka kafka 4096 Mar 17 21:37 com.krishna.cash-1
     drwx------ 2 kafka kafka 4096 Mar 17 20:52 com.krishna.cash-4
     drwx------ 2 kafka kafka 4096 Mar 17 20:52 com.krishna.cash-7

     Note: my Kafka ZooKeeper and Hermes ZooKeeper are separate clusters.

    opened by hikrrish 15
  • Broker seems to be down

     Hi, I have some problems during Hermes installation on my Ubuntu 14.04.02, which otherwise works natively. After following your tutorial I created a group and a topic, then a subscription for this topic. Unfortunately, I cannot publish any message and keep receiving {"message":"Broker seems to be down, cause: Failed to update metadata after 500 ms.","code":"INTERNAL_ERROR"}. Inspecting the logs with docker-compose logs shows that one exception is being thrown repeatedly:

    consumers_1  | 2015-07-28 14:38:25.878 WARN  kafka.client.ClientUtils$ - Fetching topic metadata with correlation id 54 for topics [Set(G1.T1)] from broker [id:9092,host:192.168.24.169,port:9092] failed
    consumers_1  | java.nio.channels.ClosedChannelException: null
    consumers_1  |  at kafka.network.BlockingChannel.send(BlockingChannel.scala:100) ~[kafka_2.10-0.8.2.0.jar:na]
    consumers_1  |  at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73) ~[kafka_2.10-0.8.2.0.jar:na]
    consumers_1  |  at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72) ~[kafka_2.10-0.8.2.0.jar:na]
    consumers_1  |  at kafka.producer.SyncProducer.send(SyncProducer.scala:113) ~[kafka_2.10-0.8.2.0.jar:na]
    consumers_1  |  at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58) [kafka_2.10-0.8.2.0.jar:na]
    consumers_1  |  at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93) [kafka_2.10-0.8.2.0.jar:na]
    consumers_1  |  at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66) [kafka_2.10-0.8.2.0.jar:na]
    consumers_1  |  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60) [kafka_2.10-0.8.2.0.jar:na]
    

     A few days ago I found a similar issue here, https://github.com/allegro/hermes/issues/33, but that solution didn't work for me. I've checked many possible values for KAFKA_ADVERTISED_HOST_NAME: the eth0 address from ifconfig, the docker0 address from ifconfig, localhost, but the issue remains. Before each modification of the Hermes settings I cleared the whole Docker environment to check whether the applied changes work.

     To provide you with more information about my case, I'm attaching the logs here. docker_kafka_1

    [2015-07-28 14:10:48,428] INFO Verifying properties (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,452] INFO Property advertised.host.name is overridden to 192.168.24.169 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,452] INFO Property advertised.port is overridden to 9092 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,453] INFO Property broker.id is overridden to 9092 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,453] INFO Property log.cleaner.enable is overridden to false (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,453] INFO Property log.dirs is overridden to /kafka/kafka-logs-9092 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,453] INFO Property log.retention.check.interval.ms is overridden to 300000 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,453] INFO Property log.retention.hours is overridden to 168 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,453] INFO Property log.segment.bytes is overridden to 1073741824 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,453] INFO Property num.io.threads is overridden to 8 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,453] INFO Property num.network.threads is overridden to 3 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,454] INFO Property num.partitions is overridden to 1 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,454] INFO Property num.recovery.threads.per.data.dir is overridden to 1 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,454] INFO Property port is overridden to 9092 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,454] INFO Property socket.receive.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,454] INFO Property socket.request.max.bytes is overridden to 104857600 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,454] INFO Property socket.send.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,454] INFO Property zookeeper.connect is overridden to 172.17.0.6:2181 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,454] INFO Property zookeeper.connection.timeout.ms is overridden to 6000 (kafka.utils.VerifiableProperties)
    [2015-07-28 14:10:48,482] INFO [Kafka Server 9092], starting (kafka.server.KafkaServer)
    [2015-07-28 14:10:48,483] INFO [Kafka Server 9092], Connecting to zookeeper on 172.17.0.6:2181 (kafka.server.KafkaServer)
    [2015-07-28 14:10:48,490] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
    [2015-07-28 14:10:48,495] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:host.name=c2dc4a71f97b (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:java.version=1.6.0_34 (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:java.vendor=Sun Microsystems Inc. (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:java.home=/usr/lib/jvm/java-6-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:java.class.path=:/opt/kafka_2.10-0.8.2.0/bin/../core/build/dependant-libs-2.10.4*/*.jar:/opt/kafka_2.10-0.8.2.0/bin/../examples/build/libs//kafka-examples*.jar:/opt/kafka_2.10-0.8.2.0/bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:/opt/kafka_2.10-0.8.2.0/bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:/opt/kafka_2.10-0.8.2.0/bin/../clients/build/libs/kafka-clients*.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/jopt-simple-3.2.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/kafka-clients-0.8.2.0.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/kafka_2.10-0.8.2.0-javadoc.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/kafka_2.10-0.8.2.0-scaladoc.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/kafka_2.10-0.8.2.0-sources.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/kafka_2.10-0.8.2.0-test.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/kafka_2.10-0.8.2.0.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/log4j-1.2.16.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/lz4-1.2.0.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/scala-library-2.10.4.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/slf4j-api-1.7.6.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/slf4j-log4j12-1.6.1.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/snappy-java-1.1.1.6.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/zkclient-0.3.jar:/opt/kafka_2.10-0.8.2.0/bin/../libs/zookeeper-3.4.6.jar:/opt/kafka_2.10-0.8.2.0/bin/../core/build/libs/kafka_2.10*.jar (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:java.library.path=/usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk-amd64/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:os.version=3.16.0-45-generic (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,495] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,496] INFO Initiating client connection, connectString=172.17.0.6:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@4b704006 (org.apache.zookeeper.ZooKeeper)
    [2015-07-28 14:10:48,515] INFO Opening socket connection to server 172.17.0.6/172.17.0.6:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
    [2015-07-28 14:10:48,519] INFO Socket connection established to 172.17.0.6/172.17.0.6:2181, initiating session (org.apache.zookeeper.ClientCnxn)
    [2015-07-28 14:10:48,540] INFO Session establishment complete on server 172.17.0.6/172.17.0.6:2181, sessionid = 0x14ed5005e750000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
    [2015-07-28 14:10:48,541] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
    [2015-07-28 14:10:48,677] INFO Log directory '/kafka/kafka-logs-9092' not found, creating it. (kafka.log.LogManager)
    [2015-07-28 14:10:48,685] INFO Loading logs. (kafka.log.LogManager)
    [2015-07-28 14:10:48,690] INFO Logs loading complete. (kafka.log.LogManager)
    [2015-07-28 14:10:48,691] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
    [2015-07-28 14:10:48,694] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
    [2015-07-28 14:10:48,732] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
    [2015-07-28 14:10:48,733] INFO [Socket Server on Broker 9092], Started (kafka.network.SocketServer)
    [2015-07-28 14:10:48,819] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
    [2015-07-28 14:10:48,904] INFO conflict in /brokers/ids/9092 data: {"jmx_port":-1,"timestamp":"1438092648877","host":"192.168.24.169","version":1,"port":9092} stored data: {"jmx_port":-1,"timestamp":"1438082684810","host":"192.168.24.169","version":1,"port":9092} (kafka.utils.ZkUtils$)
    [2015-07-28 14:10:48,928] INFO I wrote this conflicted ephemeral node [{"jmx_port":-1,"timestamp":"1438092648877","host":"192.168.24.169","version":1,"port":9092}] at /brokers/ids/9092 a while back in a different session, hence I will backoff for this node to be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
    [2015-07-28 14:10:55,087] INFO conflict in /brokers/ids/9092 data: {"jmx_port":-1,"timestamp":"1438092648877","host":"192.168.24.169","version":1,"port":9092} stored data: {"jmx_port":-1,"timestamp":"1438082684810","host":"192.168.24.169","version":1,"port":9092} (kafka.utils.ZkUtils$)
    [2015-07-28 14:10:55,113] INFO I wrote this conflicted ephemeral node [{"jmx_port":-1,"timestamp":"1438092648877","host":"192.168.24.169","version":1,"port":9092}] at /brokers/ids/9092 a while back in a different session, hence I will backoff for this node to be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
    [2015-07-28 14:10:56,827] ERROR Error handling event ZkEvent[Data of /controller changed sent to kafka.server.ZookeeperLeaderElector$LeaderChangeListener@39fc58f3] (org.I0Itec.zkclient.ZkEventThread)
    java.lang.IllegalStateException: Kafka scheduler has not been started
        at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
        at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
        at kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
        at kafka.controller.KafkaController$$anonfun$2.apply$mcV$sp(KafkaController.scala:162)
        at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:138)
        at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:134)
        at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:134)
        at kafka.utils.Utils$.inLock(Utils.scala:535)
        at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:134)
        at org.I0Itec.zkclient.ZkClient$6.run(ZkClient.java:549)
        at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
    [2015-07-28 14:11:01,487] INFO Registered broker 9092 at path /brokers/ids/9092 with address 192.168.24.169:9092. (kafka.utils.ZkUtils$)
    [2015-07-28 14:11:01,751] INFO [Kafka Server 9092], started (kafka.server.KafkaServer)
    

    docker_consumers_1

    ppiorkowski@ppiorkowski:~/Development/brain/tools$ docker logs  docker_consumers_1 
    2015-07-28 14:10:48.921 INFO  c.n.c.sources.URLConfigurationSource - URLs to be used as dynamic configuration source: [file:/etc/hermes/consumers.properties]
    2015-07-28 14:10:48.955 INFO  c.n.config.DynamicPropertyFactory - DynamicPropertyFactory is initialized with configuration sources: com.netflix.config.ConcurrentCompositeConfiguration@402e37bc
    2015-07-28 14:10:49.013 INFO  o.a.c.f.imps.CuratorFrameworkImpl - Starting
    2015-07-28 14:10:49.093 INFO  o.a.c.f.state.ConnectionStateManager - State change: CONNECTED
    2015-07-28 14:10:49.384 INFO  o.a.c.f.imps.CuratorFrameworkImpl - Starting
    2015-07-28 14:10:49.390 INFO  o.a.c.f.state.ConnectionStateManager - State change: CONNECTED
    2015-07-28 14:10:49.451 INFO  p.a.t.h.c.cache.zookeeper.NodeCache - Got entry change event for path /run/hermes/groups/G1
    2015-07-28 14:10:49.462 INFO  p.a.t.h.c.cache.zookeeper.NodeCache - Got entry change event for path /run/hermes/groups/G1/topics/T1
    2015-07-28 14:10:49.579 INFO  p.a.t.h.c.s.c.z.SubscriptionsNodeCache - Got subscription change event for path /run/hermes/groups/G1/topics/T1/subscriptions/S1 type CHILD_ADDED
    2015-07-28 14:10:49.877 INFO  org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread.
    2015-07-28 14:10:50.290 INFO  org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
    2015-07-28 14:11:13.587 WARN  kafka.client.ClientUtils$ - Fetching topic metadata with correlation id 0 for topics [Set(G1.T1)] from broker [id:9092,host:192.168.24.169,port:9092] failed
    java.nio.channels.ClosedChannelException: null
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:100) ~[kafka_2.10-0.8.2.0.jar:na]
        at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73) ~[kafka_2.10-0.8.2.0.jar:na]
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72) ~[kafka_2.10-0.8.2.0.jar:na]
        at kafka.producer.SyncProducer.send(SyncProducer.scala:113) ~[kafka_2.10-0.8.2.0.jar:na]
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58) [kafka_2.10-0.8.2.0.jar:na]
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93) [kafka_2.10-0.8.2.0.jar:na]
        at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66) [kafka_2.10-0.8.2.0.jar:na]
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60) [kafka_2.10-0.8.2.0.jar:na]
    2015-07-28 14:11:43.701 WARN  kafka.client.ClientUtils$ - Fetching topic metadata with correlation id 1 for topics [Set(G1.T1)] from broker [id:9092,host:192.168.24.169,port:9092] failed
    java.nio.channels.ClosedChannelException: null
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:100) ~[kafka_2.10-0.8.2.0.jar:na]
        at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73) ~[kafka_2.10-0.8.2.0.jar:na]
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72) ~[kafka_2.10-0.8.2.0.jar:na]
        at kafka.producer.SyncProducer.send(SyncProducer.scala:113) ~[kafka_2.10-0.8.2.0.jar:na]
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58) [kafka_2.10-0.8.2.0.jar:na]
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93) [kafka_2.10-0.8.2.0.jar:na]
        at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66) [kafka_2.10-0.8.2.0.jar:na]
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60) [kafka_2.10-0.8.2.0.jar:na]
    2015-07-28 14:11:43.704 WARN  k.c.ConsumerFetcherManager$LeaderFinderThread - [G1_T1_S1_e0ed93125dda-1438092649872-d639c205-leader-finder-thread], Failed to find leader for Set([G1.T1,8], [G1.T1,2], [G1.T1,3], [G1.T1,0], [G1.T1,4], [G1.T1,6], [G1.T1,7], [G1.T1,1], [G1.T1,5], [G1.T1,9])
    

    docker_zookeeper_1

    JMX enabled by default
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    2015-07-28 14:10:47,960 [myid:] - INFO  [main:QuorumPeerConfig@103] - Reading configuration from: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    2015-07-28 14:10:47,963 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
    2015-07-28 14:10:47,963 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
    2015-07-28 14:10:47,964 [myid:] - WARN  [main:QuorumPeerMain@113] - Either no config or no quorum defined in config, running  in standalone mode
    2015-07-28 14:10:47,964 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
    2015-07-28 14:10:47,973 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
    2015-07-28 14:10:47,974 [myid:] - INFO  [main:QuorumPeerConfig@103] - Reading configuration from: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    2015-07-28 14:10:47,974 [myid:] - INFO  [main:ZooKeeperServerMain@95] - Starting server
    2015-07-28 14:10:47,979 [myid:] - INFO  [main:Environment@100] - Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
    2015-07-28 14:10:47,980 [myid:] - INFO  [main:Environment@100] - Server environment:host.name=4675d9977d60
    2015-07-28 14:10:47,980 [myid:] - INFO  [main:Environment@100] - Server environment:java.version=1.7.0_65
    2015-07-28 14:10:47,980 [myid:] - INFO  [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
    2015-07-28 14:10:47,980 [myid:] - INFO  [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
    2015-07-28 14:10:47,980 [myid:] - INFO  [main:Environment@100] - Server environment:java.class.path=/opt/zookeeper-3.4.6/bin/../build/classes:/opt/zookeeper-3.4.6/bin/../build/lib/*.jar:/opt/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/opt/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/opt/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.6/bin/../conf:
    2015-07-28 14:10:47,980 [myid:] - INFO  [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
    2015-07-28 14:10:47,980 [myid:] - INFO  [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
    2015-07-28 14:10:47,982 [myid:] - INFO  [main:Environment@100] - Server environment:java.compiler=<NA>
    2015-07-28 14:10:47,982 [myid:] - INFO  [main:Environment@100] - Server environment:os.name=Linux
    2015-07-28 14:10:47,982 [myid:] - INFO  [main:Environment@100] - Server environment:os.arch=amd64
    2015-07-28 14:10:47,982 [myid:] - INFO  [main:Environment@100] - Server environment:os.version=3.16.0-45-generic
    2015-07-28 14:10:47,982 [myid:] - INFO  [main:Environment@100] - Server environment:user.name=root
    2015-07-28 14:10:47,982 [myid:] - INFO  [main:Environment@100] - Server environment:user.home=/root
    2015-07-28 14:10:47,982 [myid:] - INFO  [main:Environment@100] - Server environment:user.dir=/opt/zookeeper-3.4.6
    2015-07-28 14:10:47,983 [myid:] - INFO  [main:ZooKeeperServer@755] - tickTime set to 2000
    2015-07-28 14:10:47,983 [myid:] - INFO  [main:ZooKeeperServer@764] - minSessionTimeout set to -1
    2015-07-28 14:10:47,983 [myid:] - INFO  [main:ZooKeeperServer@773] - maxSessionTimeout set to -1
    2015-07-28 14:10:47,991 [myid:] - INFO  [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2181
    2015-07-28 14:10:48,520 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.7:60189
    2015-07-28 14:10:48,525 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /172.17.0.7:60189
    2015-07-28 14:10:48,526 [myid:] - INFO  [SyncThread:0:FileTxnLog@199] - Creating new log file: log.1b7
    2015-07-28 14:10:48,538 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617] - Established session 0x14ed5005e750000 with negotiated timeout 6000 for client /172.17.0.7:60189
    2015-07-28 14:10:48,895 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x14ed5005e750000 type:create cxid:0xf zxid:0x1b8 txntype:-1 reqpath:n/a Error Path:/brokers/ids/9092 Error:KeeperErrorCode = NodeExists for /brokers/ids/9092
    2015-07-28 14:10:49,076 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.8:54176
    2015-07-28 14:10:49,078 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /172.17.0.8:54176
    2015-07-28 14:10:49,086 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617] - Established session 0x14ed5005e750001 with negotiated timeout 7000 for client /172.17.0.8:54176
    2015-07-28 14:10:49,385 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.8:54178
    2015-07-28 14:10:49,386 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /172.17.0.8:54178
    2015-07-28 14:10:49,389 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617] - Established session 0x14ed5005e750002 with negotiated timeout 7000 for client /172.17.0.8:54178
    2015-07-28 14:10:49,549 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.9:54530
    2015-07-28 14:10:49,553 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /172.17.0.9:54530
    2015-07-28 14:10:49,556 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617] - Established session 0x14ed5005e750003 with negotiated timeout 10000 for client /172.17.0.9:54530
    2015-07-28 14:10:49,878 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.8:54180
    2015-07-28 14:10:49,878 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /172.17.0.8:54180
    2015-07-28 14:10:50,290 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617] - Established session 0x14ed5005e750004 with negotiated timeout 7000 for client /172.17.0.8:54180
    2015-07-28 14:10:52,263 [myid:] - WARN  [SyncThread:0:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:0 took 1907ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
    2015-07-28 14:10:52,805 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x14ed5005e750004 type:create cxid:0x15 zxid:0x1be txntype:-1 reqpath:n/a Error Path:/consumers/G1_T1_S1/owners/G1.T1/6 Error:KeeperErrorCode = NodeExists for /consumers/G1_T1_S1/owners/G1.T1/6
    2015-07-28 14:10:53,178 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x14ed5005e750004 type:create cxid:0x17 zxid:0x1bf txntype:-1 reqpath:n/a Error Path:/consumers/G1_T1_S1/owners/G1.T1/9 Error:KeeperErrorCode = NodeExists for /consumers/G1_T1_S1/owners/G1.T1/9
    2015-07-28 14:10:53,277 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.10:53206
    2015-07-28 14:10:53,280 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /172.17.0.10:53206
    2015-07-28 14:10:53,346 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617] - Established session 0x14ed5005e750005 with negotiated timeout 10000 for client /172.17.0.10:53206
    2015-07-28 14:10:53,347 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x14ed5005e750004 type:create cxid:0x19 zxid:0x1c1 txntype:-1 reqpath:n/a Error Path:/consumers/G1_T1_S1/owners/G1.T1/8 Error:KeeperErrorCode = NodeExists for /consumers/G1_T1_S1/owners/G1.T1/8
    2015-07-28 14:10:53,357 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x14ed5005e750004 type:create cxid:0x1b zxid:0x1c2 txntype:-1 reqpath:n/a Error Path:/consumers/G1_T1_S1/owners/G1.T1/5 Error:KeeperErrorCode = NodeExists for /consumers/G1_T1_S1/owners/G1.T1/5
    2015-07-28 14:10:53,388 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x14ed5005e750004 type:create cxid:0x1d zxid:0x1c3 txntype:-1 reqpath:n/a Error Path:/consumers/G1_T1_S1/owners/G1.T1/7 Error:KeeperErrorCode = NodeExists for /consumers/G1_T1_S1/owners/G1.T1/7
    2015-07-28 14:10:53,509 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.10:53209
    2015-07-28 14:10:53,510 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /172.17.0.10:53209
    2015-07-28 14:10:53,519 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /172.17.0.10:53210
    2015-07-28 14:10:53,519 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /172.17.0.10:53210
    2015-07-28 14:10:53,542 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617] - Established session 0x14ed5005e750006 with negotiated timeout 40000 for client /172.17.0.10:53209
    2015-07-28 14:10:53,628 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617] - Established session 0x14ed5005e750007 with negotiated timeout 10000 for client /172.17.0.10:53210
    2015-07-28 14:10:54,929 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x14ed5005e750000 type:create cxid:0x12 zxid:0x1c6 txntype:-1 reqpath:n/a Error Path:/brokers/ids/9092 Error:KeeperErrorCode = NodeExists for /brokers/ids/9092
    2015-07-28 14:10:56,000 [myid:] - INFO  [SessionTracker:ZooKeeperServer@347] - Expiring session 0x14ed467dd920001, timeout of 7000ms exceeded
    2015-07-28 14:10:56,000 [myid:] - INFO  [SessionTracker:ZooKeeperServer@347] - Expiring session 0x14ed467dd920002, timeout of 7000ms exceeded
    2015-07-28 14:10:56,000 [myid:] - INFO  [SessionTracker:ZooKeeperServer@347] - Expiring session 0x14ed467dd920004, timeout of 7000ms exceeded
    2015-07-28 14:10:56,000 [myid:] - INFO  [SessionTracker:ZooKeeperServer@347] - Expiring session 0x14ed467dd920000, timeout of 6000ms exceeded
    2015-07-28 14:10:56,000 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14ed467dd920001
    2015-07-28 14:10:56,001 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14ed467dd920002
    2015-07-28 14:10:56,001 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14ed467dd920004
    2015-07-28 14:10:56,001 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14ed467dd920000
    2015-07-28 14:11:00,000 [myid:] - INFO  [SessionTracker:ZooKeeperServer@347] - Expiring session 0x14ed467dd920003, timeout of 10000ms exceeded
    2015-07-28 14:11:00,001 [myid:] - INFO  [SessionTracker:ZooKeeperServer@347] - Expiring session 0x14ed467dd920007, timeout of 10000ms exceeded
    2015-07-28 14:11:00,001 [myid:] - INFO  [SessionTracker:ZooKeeperServer@347] - Expiring session 0x14ed467dd920005, timeout of 10000ms exceeded
    2015-07-28 14:11:00,001 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14ed467dd920003
    2015-07-28 14:11:00,001 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14ed467dd920007
    2015-07-28 14:11:00,001 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14ed467dd920005
    2015-07-28 14:11:30,000 [myid:] - INFO  [SessionTracker:ZooKeeperServer@347] - Expiring session 0x14ed467dd920006, timeout of 40000ms exceeded
    2015-07-28 14:11:30,000 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x14ed467dd920006
    
    opened by piorkowskiprzemyslaw 15
  • Exception for consumer offsets in zookeeper

    @adamdubiel

     We started seeing the exception below in ZooKeeper, after which we cleaned up all the ZooKeeper and Kafka nodes, thinking that frontend persistence would take care of all the messages. After the restart, the frontend persisted all the messages (acknowledged by the Kafka brokers). However, the consumer doesn't seem to deliver the messages from the frontend buffer. We are using Kafka 0.8.2.2 and the 0.8.5 hotfix version of Hermes consumers.

    [myid:200] - INFO [ProcessThread(sid:200 cport:-1)::PrepRequestProcessor@651] - Got user-level KeeperException when processing sessionid:0xc857d38211b10017 type:setData cxid:0x116f zxid:0x1000019b5 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/28/state Error:KeeperErrorCode = BadVersion for /brokers/topics/__consumer_offsets/partitions/28/state

    opened by hikrrish 14
  • Consumer is re-created after subscription is deleted

     After deleting a subscription, the consumer stopped as expected. The workload balance then kicks in with a signal for a new assignment created for the already deleted subscription, so a new consumer is started for the new assignment. At least this is what I can figure out from the logs here; I've also left some comments on the logs. What ends up in ZooKeeper is that there is no node for the subscription, yet there is an assignment node under /runtime.

     The configuration used was on my local machine with a single instance of the consumer:

    • consumer.workload.algorithm=selective
    • consumer.workload.rebalance.interval.seconds=30
    • consumer.workload.consumers.per.subscription=1
    • consumer.workload.max.subscriptions.per.consumer=200
    opened by ghost 14
  • User can specify filter for outgoing events

     The Consumers module should allow specifying a filter for outgoing events. The filter expression would allow writing simple matchers, and Consumers would send only the events that match the query.

     • how to define the filter language? JSON Path? What about Avro?
     • how complicated can a query be? I think for now a simple field_path == value should be enough (see the sketch below)
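
     A minimal sketch of the proposed field_path == value matcher, assuming the Jayway JsonPath library; names and behaviour here are only illustrative, not a Hermes API.

     import com.jayway.jsonpath.JsonPath;
     import com.jayway.jsonpath.PathNotFoundException;

     public class SimpleFieldMatcher {
         public static boolean matches(String json, String fieldPath, String expectedValue) {
             try {
                 Object value = JsonPath.read(json, fieldPath); // e.g. "$.offer.type"
                 return expectedValue.equals(String.valueOf(value));
             } catch (PathNotFoundException e) {
                 return false; // missing field means no match
             }
         }

         public static void main(String[] args) {
             System.out.println(matches("{\"offer\":{\"type\":\"new\"}}", "$.offer.type", "new")); // true
         }
     }
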
    enhancement incubation 
    opened by adamdubiel 14
  • Consumer not able to deliver messages

    @adamdubiel

     We have a 3-node cluster and we saw this error on one of the nodes this morning:

     2017-02-03 4.575 ERROR p.a.t.h.c.c.r.k.KafkaSingleThreadedMessageReceiver - Error while reading message for subscription {s}
     org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition abc at offset 73642
     Caused by: java.lang.IllegalArgumentException: null
         at java.nio.Buffer.limit(Buffer.java:275)
         at org.apache.kafka.common.record.Record.sliceDelimited(Record.java:392)
         at org.apache.kafka.common.record.Record.value(Record.java:368)
         at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:625)
         at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(Fetcher.java:548)
         at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:354)
         at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1000)
         at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
         at pl.allegro.tech.hermes.consumers.consumer.receiver.kafka.KafkaSingleThreadedMessageReceiver.next(KafkaSingleThreadedMessageReceiver.java:92)
         at pl.allegro.tech.hermes.consumers.consumer.receiver.kafka.FilteringMessageReceiver.next(FilteringMessageReceiver.java:37)
         at pl.allegro.tech.hermes.consumers.consumer.SerialConsumer.consume(SerialConsumer.java:91)
         at pl.allegro.tech.hermes.consumers.supervisor.process.ConsumerProcess.run(ConsumerProcess.java:59)
         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
         at java.lang.Thread.run(Thread.java:745)

    I tried restarting consumers, but no luck

    opened by hikrrish 13
  • Road to 1.0.0

     I think it is time we thought about the 1.0.0 release. At this point I think Hermes is feature complete, and the features that are still in incubation will surely mature soon, thanks to our early adopters.

     However, there are a few major, possibly breaking things, mostly in the non-functional area, that need to be done before we move into the 1.x.x space:

     • drop support for the Kafka 0.8.x protocol and move to the new implementation of Consumers - a must-have for the 0.9.0 release
     • new, process-based implementation of Consumers, already reviewed in #505
     • drop the servlets abstraction in Frontend, also already in progress
     • create a consistent naming convention for properties across all modules (Frontend, Management and Consumers) and rename existing properties to match the new convention, with some migration script to automate it (e.g. some properties include a time unit like .seconds or .millis, others don't)
     • change the modules & extensions approach - e.g. activating the elasticsearch module should not require building a new project and plugging everything in programmatically; we could probably leverage some classpath scanning - to discuss
     • choose one DI framework for all modules (including Management) - I think Dagger2 or Guice; HK2 is not a fit for such a big project
     • reconsider using one tech stack for all modules; Management is too different from Consumers and Frontend, which makes reusing components & documentation painful.

     The list is open for discussion. I would especially appreciate opinions from @sslavic, @gamefundas and @hikrrish as our known clients from outside Allegro. Of course @allegro/hermes is welcome to comment as well.

    roadmap 
    opened by adamdubiel 13
  • Getting SSL exception on starting consumers

     I have built a custom consumer as described in the documentation (based on Spring Boot). I am getting the exceptions below. I believe there are no SSL configurations for consumers. I am running the frontend with SSL/HTTP2 enabled.

    java.lang.IllegalStateException: SSL doesn't have a valid keystore

    Complete logs : http://pastebin.com/gG2v0Ww6

    opened by hikrrish 13
  • Failed batch caused by client's 4xx does not update discarded meter

     If we have a batch subscription with retryClientErrors disabled and get a 4xx response code from a client, the discarded events meter is not updated. The subscription's discarded meter should take that case into account too.
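
     A minimal sketch of the accounting this issue asks for (hypothetical method and field names, Dropwizard Metrics assumed): mark the batch as discarded when the client answers with a 4xx and retryClientErrors is disabled.

     import com.codahale.metrics.Meter;

     public class BatchDiscardAccounting {
         private final Meter discardedMeter = new Meter();

         void onBatchResponse(int statusCode, boolean retryClientErrors, int batchSize) {
             boolean clientError = statusCode >= 400 && statusCode < 500;
             if (clientError && !retryClientErrors) {
                 // this is the case that is currently not counted as discarded
                 discardedMeter.mark(batchSize);
             }
         }
     }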

    bug 
    opened by faderskd 0
  • Batch subscription should be stoppable despite of pending message retries

     If you have a batch subscription sending messages to a receiver that returns 5xx status codes, the messages are retried until the TTL has passed. Sending with retries blocks the BatchConsumer main loop, thus blocking the processing of consumer signals. If the subscription is stopped while retries are in progress, it waits until the retries are exhausted. This is not the desired behaviour. The subscription should be stopped immediately even if there are pending retries.
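
     A minimal sketch of the desired behaviour (illustrative only, not actual Hermes code): the retry loop also observes a stop signal, so a stopped subscription gives up immediately instead of retrying until the TTL is exhausted.

     import java.time.Duration;
     import java.time.Instant;

     public class StoppableBatchRetry {
         private volatile boolean stopped = false;

         public void stop() { stopped = true; }   // called when the subscription is stopped

         public void deliverWithRetries(Runnable sendBatch, Duration ttl, Duration backoff) throws InterruptedException {
             Instant deadline = Instant.now().plus(ttl);
             while (!stopped && Instant.now().isBefore(deadline)) {
                 try {
                     sendBatch.run();                  // attempt delivery; assume it throws on a 5xx response
                     return;                           // delivered, leave the loop
                 } catch (RuntimeException retryableError) {
                     Thread.sleep(backoff.toMillis()); // back off before the next attempt
                 }
             }
             // either stopped or TTL exhausted: give up without blocking the consumer loop any further
         }
     }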

    bug 
    opened by faderskd 0
  • ERROR when deploying Hermes: in gradle Step

     Hello, I'm getting the following error (screenshot attached below) when running "docker-compose up". I don't know why it doesn't find the file; I've downloaded the full project (https://github.com/allegro/hermes). Any ideas?

    A problem occurred evaluating root project 'hermes'.
    > java.io.FileNotFoundException: /home/gradle/src/config (Is a directory)
    

    I'm executing "docker-compose up" command inside hermes-master/docker directory as shown in screenshot.

    opened by emartinrda 0
  • Updated gradle to 6.9.3 and optimized build process

    Build Optimisations

     This PR provides a set of optimisations that lead to faster build times and more robust local builds:

    1. Incremental builds are now supported (hermes-consumers/build.gradle, hermes-management/build.gradle)
    2. Parallel execution is enabled (org.gradle.parallel=true in gradle.properties)
     3. Tests are now executed in parallel (HermesSenderTest, ReactiveHermesSenderTest, IntegrationTest and ZookeeperBaseTest had to be modified to use random ports, as reusing the same port numbers prevented the tests from succeeding when run in parallel; see the sketch after this list)
    4. Gradle has been updated to 6.9.3 (latest 6.x version)
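
     A minimal sketch of the random-port approach mentioned in point 3 (illustrative, not the exact code from this PR): let the OS pick a free port so parallel test runs do not collide.

     import java.io.IOException;
     import java.net.ServerSocket;

     public final class Ports {
         private Ports() {}

         public static int randomFreePort() {
             // binding to port 0 asks the OS for any free ephemeral port
             try (ServerSocket socket = new ServerSocket(0)) {
                 return socket.getLocalPort();
             } catch (IOException e) {
                 throw new IllegalStateException("Could not find a free port", e);
             }
         }
     }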

    Benchmarks

     The benchmarks were performed on an Apple M1 Max machine.

    Original build

    • ./gradlew clean assemble check took about 3m 11s (build scan)
    • subsequent ./gradlew clean assemble check took about 20s (build scan)

    Optimized build

    • ./gradlew clean assemble check took about 1m 14s (build scan)
    • subsequent ./gradlew clean assemble check took about 1s (build scan)

    Build cache

    The build cache was not enabled in the build scripts, but the builds can already benefit from it (e.g. when building locally):

    # clear the cache
    rm -rf ~/.gradle/caches/build-cache-1
    
    # first build
    ./gradlew clean build --build-cache
    ...
    BUILD SUCCESSFUL in 1m 16s
    157 actionable tasks: 157 executed
    
    # subsequent builds
    ./gradlew clean build --build-cache
    ...
    BUILD SUCCESSFUL in 5s
    157 actionable tasks: 78 executed, 79 from cache
    
    opened by ajordanow 0
Releases(hermes-2.2.8)
  • hermes-2.2.8(Dec 27, 2022)

    What's Changed

    Enhancements

    • Docker-compose working on m1 by @dswiecki in https://github.com/allegro/hermes/pull/1614
    • preserve testng test order by @moscicky in https://github.com/allegro/hermes/pull/1640
    • add logs to zookeeper container by @moscicky in https://github.com/allegro/hermes/pull/1641
    • failing test reporter by @moscicky in https://github.com/allegro/hermes/pull/1634
    • Configurable consumer inflight queue size by @moscicky in https://github.com/allegro/hermes/pull/1648

    Bugfixes

    • fix schema registry in docker compose by @moscicky in https://github.com/allegro/hermes/pull/1639
    • fix groovy version collision by @moscicky in https://github.com/allegro/hermes/pull/1646

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.2.7...hermes-2.2.8

    Important changes

    This release changes the way in which inflight queue size for consumers is calculated.

     Prior to this change, the inflight queue size was calculated based on two values:

    • subscriptionInflightSize - set per subscription, default value: 100
    • globalInflightSize - global property configured by consumer.serialConsumer.inflightSize, default value: 100

    Final queue inflight size was calculated as min(subscriptionInflightSize, globalInflightSize).

     In this release, the default subscriptionInflightSize is set to null, and the final size will be calculated as:

    subscriptionInflightSize != null ? subscriptionInflightSize : globalInflightSize
    

    subscriptionInflightSize will be configurable only by admin users from now on.
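
    Expressed as code, the change boils down to the following (a minimal sketch, not the actual Hermes implementation; method names are illustrative only):

    final class InflightSizeResolution {

        // Old behaviour: the effective size was always capped by the global property.
        static int before(int subscriptionInflightSize, int globalInflightSize) {
            return Math.min(subscriptionInflightSize, globalInflightSize);
        }

        // New behaviour: an explicitly set subscription value wins; otherwise the
        // global consumer.serialConsumer.inflightSize value is used.
        static int after(Integer subscriptionInflightSize, int globalInflightSize) {
            return subscriptionInflightSize != null ? subscriptionInflightSize : globalInflightSize;
        }
    }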

    Migration

    Deploying this version without any migration would cause the inflight queue of subscriptions with subscriptionInflightSize greater than globalInflightSize to grow to subscriptionInflightSize.

    To prevent this, the following migration is needed:

    1. For subscriptions with subscriptionInflightSize >= globalInflightSize, set subscriptionInflightSize=globalInflightSize
    2. Deploy hermes-management with new version
    3. Deploy hermes-consumers with new version
    4. For subscriptions with subscriptionInflightSize == globalInflightSize, set subscriptionInflightSize=null

    Rollback

    Changes are safe to rollback.

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.2.7(Dec 27, 2022)

    What's Changed

    Enhancements

    • Improve http client monitoring by @moscicky in https://github.com/allegro/hermes/pull/1632

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.2.6...hermes-2.2.7

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.2.6(Dec 27, 2022)

    What's Changed

    Enhancements

    • Broadcast rate limiting by @moscicky in https://github.com/allegro/hermes/pull/1613

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.2.5...hermes-2.2.6

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.2.5(Dec 27, 2022)

    What's Changed

    Enhancements

    • Updated curator to 5.4.0 by @piotrrzysko in https://github.com/allegro/hermes/pull/1629

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.2.4...hermes-2.2.5

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.2.4(Dec 2, 2022)

    What's Changed

    Enhancements

    • Change http client for batch subscriptions by @moscicky in https://github.com/allegro/hermes/pull/1620
    • add gradle wrapper validation by @moscicky in https://github.com/allegro/hermes/pull/1628

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.2.3...hermes-2.2.4

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.2.3(Dec 2, 2022)

    What's Changed

    Enhancements

    • Added possibility to configure partition assignment strategy by @piotrrzysko in https://github.com/allegro/hermes/pull/1623
    • Pubsub message compression by @michalferlinski in https://github.com/allegro/hermes/pull/1558
    • Metrics about undelivered messages by @piotrrzysko in https://github.com/allegro/hermes/pull/1625

    Bugfixes

    • Fix event auditor by @moscicky in https://github.com/allegro/hermes/pull/1626

    New Contributors

    • @michalferlinski made their first contribution in https://github.com/allegro/hermes/pull/1558

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.2.2...hermes-2.2.3

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.2.2(Nov 28, 2022)

    What's Changed

    Enhancements

    • resolve #1578 | update curator to latest version by @kirsteend in https://github.com/allegro/hermes/pull/1592

    New Contributors

    • @kirsteend made their first contribution in https://github.com/allegro/hermes/pull/1592

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.2.1...hermes-2.2.2

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.2.1(Nov 28, 2022)

    What's Changed

    Enhancements

    • bump kafka-clients to 2.8.2 by @moscicky in https://github.com/allegro/hermes/pull/1617

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.2.0...hermes-2.2.1

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.2.0(Nov 4, 2022)

    What's Changed

    Enhancements

    • Add option to skip events in hermes console by @sobelek in https://github.com/allegro/hermes/pull/1579
    • Allow to reset selected received requests in Hermes Mock by @platan in https://github.com/allegro/hermes/pull/1589
    • resolve #1599 | checkstyle for unit tests by @arrekb in https://github.com/allegro/hermes/pull/1604
    • Hermes-frontend - metrics redesign by @szczygiel-m in https://github.com/allegro/hermes/pull/1595
    • Moved away from jcenter by @piotrrzysko in https://github.com/allegro/hermes/pull/1611
    • resolve #1601 | Add checkstyle for benchmarks by @iremugurlu in https://github.com/allegro/hermes/pull/1610
    • Issue #1600 | Add checkstyle to integration tests by @tarczynskitomek in https://github.com/allegro/hermes/pull/1609
    • Easy local development by @faderskd in https://github.com/allegro/hermes/pull/1598

    Bugfixes

    • Fix checkstyle action by @piotrrzysko in https://github.com/allegro/hermes/pull/1602

    New Contributors

    • @arrekb made their first contribution in https://github.com/allegro/hermes/pull/1604
    • @iremugurlu made their first contribution in https://github.com/allegro/hermes/pull/1610
    • @tarczynskitomek made their first contribution in https://github.com/allegro/hermes/pull/1609

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.1.6...hermes-2.2.0

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.1.6(Oct 21, 2022)

    What's Changed

    Enhancements

    • update docs by @moscicky in https://github.com/allegro/hermes/pull/1574
    • Pubsub ability to not include additional headers by @wikp in https://github.com/allegro/hermes/pull/1567
    • add ability to run integration tests on m1 mac by @moscicky in https://github.com/allegro/hermes/pull/1576
    • Remove deprecated max rate negotiation docs by @sobelek in https://github.com/allegro/hermes/pull/1580
    • Replaced Gitter with GitHub discussions by @piotrrzysko in https://github.com/allegro/hermes/pull/1582
    • Migrate to material theme by @kasmar00 in https://github.com/allegro/hermes/pull/1585
    • add support for confluent images > 5.x.x by @moscicky in https://github.com/allegro/hermes/pull/1586
    • Add codeowners by @kasmar00 in https://github.com/allegro/hermes/pull/1590
    • Remove release hooks and version from console package.json by @kasmar00 in https://github.com/allegro/hermes/pull/1587
    • Removed unnecessary feature flags by @piotrrzysko in https://github.com/allegro/hermes/pull/1594
    • register metadata-age and record-queue-time-max producer metrics by @moscicky in https://github.com/allegro/hermes/pull/1597
    • Java checkstyle (#1563) by @kszapsza in https://github.com/allegro/hermes/pull/1577

    Bugfixes

    • Fix automatic generation of release notes by @pitagoras3 in https://github.com/allegro/hermes/pull/1573
    • Fix fragile float based ScoringTargetWeightCalculatorTest by @moscicky in https://github.com/allegro/hermes/pull/1575
    • Restore assertion in OfflineRetransmissionManagementTest by @platan in https://github.com/allegro/hermes/pull/1570
    • Fixed Consistency view by @sobelek in https://github.com/allegro/hermes/pull/1571
    • #1433 Wait for readiness of all Zookeeper clusters by @michal494 in https://github.com/allegro/hermes/pull/1593

    New Contributors

    • @moscicky made their first contribution in https://github.com/allegro/hermes/pull/1575
    • @kasmar00 made their first contribution in https://github.com/allegro/hermes/pull/1585
    • @kszapsza made their first contribution in https://github.com/allegro/hermes/pull/1577

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.1.5...hermes-2.1.6

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.1.5(Oct 14, 2022)

    What's Changed

    Enhancements

    • Unregistering stale subscription metrics by @piotrrzysko in https://github.com/allegro/hermes/pull/1568

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.1.4...hermes-2.1.5

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.1.4(Sep 30, 2022)

    What's Changed

    Enhancements

    • Introduce ZK secure mode: Set Zookeeper ACLs on all created nodes by @szczygiel-m in https://github.com/allegro/hermes/pull/1561

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.1.3...hermes-2.1.4

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.1.3(Sep 30, 2022)

    What's Changed

    Enhancements

    • Weighted work balancer: dynamic consumer weights adjustment by @piotrrzysko in https://github.com/allegro/hermes/pull/1559

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.1.2...hermes-2.1.3

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.1.2(Sep 16, 2022)

    What's Changed

    Enhancements

    • Extracting messageId and timestamp from Kafka headers by @druminski in https://github.com/allegro/hermes/pull/1560

    Bugfixes

    • Fix release notes generation by @pitagoras3 in https://github.com/allegro/hermes/pull/1551
    • hermes-frontend configuration fixes by @piotrrzysko in https://github.com/allegro/hermes/pull/1557

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.1.1...hermes-2.1.2

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.1.1(Sep 6, 2022)

    What's Changed

    Enhancements

    • Added new field to subscription - autoDeleteWithTopic by @Diewa in https://github.com/allegro/hermes/pull/1547

    Bugfixes

    • Fixed exceptions catching while closing consumer by @szczygiel-m in https://github.com/allegro/hermes/pull/1552
    • Add schemaId and schemaVersion to backupMessages by @faderskd in https://github.com/allegro/hermes/pull/1554

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.1.0...hermes-2.1.1

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.1.0(Aug 25, 2022)

    What's Changed

    Enhancements

    • Made the work balancer pluggable by @piotrrzysko in https://github.com/allegro/hermes/pull/1546

      Changed the following metrics paths (removed selective from the paths):

      • consumer.{hostname}.consumers-workload.{kafkaCluster}.selective.rebalance-duration -> consumer.{hostname}.consumers-workload.{kafkaCluster}.rebalance-duration
      • consumer.{hostname}.consumers-workload.{kafkaCluster}.selective.all-assignments -> consumer.{hostname}.consumers-workload.{kafkaCluster}.all-assignments
      • consumer.{hostname}.consumers-workload.{kafkaCluster}.selective.missing-resources -> consumer.{hostname}.consumers-workload.{kafkaCluster}.missing-resources
      • consumer.{hostname}.consumers-workload.{kafkaCluster}.selective.deleted-assignments -> consumer.{hostname}.consumers-workload.{kafkaCluster}.deleted-assignments
      • consumer.{hostname}.consumers-workload.{kafkaCluster}.selective.created-assignments -> consumer.{hostname}.consumers-workload.{kafkaCluster}.created-assignments
    • Consumer load metrics for weighted work balancer by @piotrrzysko in https://github.com/allegro/hermes/pull/1548

    • [SECURITY] Use HTTPS to resolve dependencies in Gradle Build by @JLLeitschuh in https://github.com/allegro/hermes/pull/1544

    • Reduced log noise generated by maxrate and workload modules by @piotrrzysko in https://github.com/allegro/hermes/pull/1550

    • Weighted work balancer by @piotrrzysko in https://github.com/allegro/hermes/pull/1549

    New Contributors

    • @JLLeitschuh made their first contribution in https://github.com/allegro/hermes/pull/1544

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-2.0.0...hermes-2.1.0

    Source code(tar.gz)
    Source code(zip)
  • hermes-2.0.0(Aug 3, 2022)

    What's Changed

    Enhancements

    • Automatically generate release notes by @pitagoras3 in https://github.com/allegro/hermes/pull/1542
    • Remove avro deserialization fallback ratelimit removed by @faderskd in https://github.com/allegro/hermes/pull/1541
    • Removing archaius from hermes by @szczygiel-m in https://github.com/allegro/hermes/pull/1508
    • Updated docs after migration from archaius to spring config by @szczygiel-m in https://github.com/allegro/hermes/pull/1545

    Other Changes

    • Removed vagrant, rely only on docker by @szczygiel-m in https://github.com/allegro/hermes/pull/1543

    Full Changelog: https://github.com/allegro/hermes/compare/hermes-1.14.1...hermes-2.0.0

    Source code(tar.gz)
    Source code(zip)
  • hermes-1.14.1(Jul 20, 2022)

    Enhancements

    (1446) Add tracking of Hermes publishing headers

    (1534) Allow any logged-in user to remove empty groups

    Bugfixes

    (1535) Fixed SubscriptionPartition equals/hashcode bug, and isCriticalEnvironment variable in management

    Source code(tar.gz)
    Source code(zip)
  • hermes-1.14.0(Jul 20, 2022)

    Enhancements

    (1492) Development environment for testing changes locally

    (1517) Allow selected subscribers to bypass subscribing restrictions

    (1511) Renamed the isDangerousEnvironment parameter to isCriticalEnvironment

    (1515) Removed the supportTeam field from groups and subscriptions

    (1523) Removed hermes-tracker-mongo

    (1510) Removed hierarchical registries

    This change switches the default implementation of registries (internal data structures used by Hermes) to the one introduced in 1110 and 1086.

    After deploying a Hermes version that includes this change, you can manually remove the following zookeeper nodes (if they exist) - see the sketch after this list:

    • {zookeeper.root}/consumers-workload/{kafka.cluster.name}/runtime
    • {zookeeper.root}/consumers-rate/{kafka.cluster.name}/runtime
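
    A minimal sketch of how those nodes could be removed with Apache Curator (which Hermes already uses); the connection string, zookeeper root and cluster name below are placeholders for your deployment, and this code is not part of Hermes itself:

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class RemoveStaleRuntimeNodes {

        public static void main(String[] args) throws Exception {
            // Placeholders: substitute your connection string, {zookeeper.root} and {kafka.cluster.name}.
            String connectionString = "localhost:2181";
            String[] stalePaths = {
                "/hermes/consumers-workload/primary/runtime",
                "/hermes/consumers-rate/primary/runtime"
            };
            try (CuratorFramework client = CuratorFrameworkFactory
                    .newClient(connectionString, new ExponentialBackoffRetry(1000, 3))) {
                client.start();
                client.blockUntilConnected();
                for (String path : stalePaths) {
                    if (client.checkExists().forPath(path) != null) {
                        // Recursively delete the runtime node and its children.
                        client.delete().deletingChildrenIfNeeded().forPath(path);
                    }
                }
            }
        }
    }
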
    Source code(tar.gz)
    Source code(zip)
  • hermes-1.13.0(Jul 20, 2022)

    Enhancements

    (1507) Added environment label in hermes-console

    (1498) Migrated hermes-frontend to the Spring framework

    If you add custom implementations of some parts of hermes-frontend, please take a look at 1506. The documentation changes introduced there should help you
    migrate to version 1.13.0.

    Thanks to @pmajorczyk-allegro for this contribution!

    Source code(tar.gz)
    Source code(zip)
  • hermes-1.12.4(Jul 20, 2022)

    Enhancements

    (1503) Check if topic group exists before topic creation

    (1501) Bumped json2avro to 0.2.14

    (1479) Refactor: kafka message conversion extracted from topic consuming logic

    Thanks @arkadius for this contribution!

    (1502) Add fix delay to stub created with ValueMatcher in hermes mock

    Thanks @sobelek for this contribution!

    (1497) Increased the width of the search bar for issue #1488

    Thanks @zzzzz1st for this contribution!

    Fixes

    (1495) Fix dead links in documentation - fixes #1494

    Thanks @AleksanderBrzozowski for this contribution!

    Source code(tar.gz)
    Source code(zip)
  • hermes-1.12.3(Jul 20, 2022)

    Enhancements

    (1472) Allow matching avro field while defining wiremock stubs

    Thanks @sobelek for this contribution!

    (1480) Refactor: Message to kafka ProducerRecord conversion logic extracted from KafkaBrokerMessageProducer

    (1477) Feature flag determining if __metadata field is required for avro content type

    Thanks @arkadius for these contributions!

    (1486) Added audit logs for subscription retransmission

    Fixes

    (1491) Topic owner is not allowed to create a subscription with any owner

    Source code(tar.gz)
    Source code(zip)
  • hermes-1.12.2(Jul 20, 2022)

  • hermes-1.12.1(Jul 20, 2022)

  • hermes-1.12.0(Jul 20, 2022)

  • hermes-1.11.2(Jul 20, 2022)

  • hermes-1.11.1(Jul 20, 2022)

    Enhancements

    (1461) Waiting for KafkaFeature during topic creation

    (1462) Test living audit events endpoint

    (1465) Choosing owner from autocomplete source prevents manual entries

    Fixes

    (1466) Use one object mapper across hermes-management

    Source code(tar.gz)
    Source code(zip)
  • hermes-1.11.0(Jul 20, 2022)

    Enhancements

    (1457) Migrated hermes-consumers to the Spring framework

    If you add custom implementations of some parts of hermes-consumers, please take a look at 1459. The documentation changes introduced there should help you
    migrate to version 1.11.0.

    Thanks to @pmajorczyk-allegro for this contribution!

    Source code(tar.gz)
    Source code(zip)
  • hermes-1.10.2(Jul 20, 2022)

    Enhancements

    (1443) Update hermes-mock and retransmission documentation

    (1448) Packages signing enabled only when env var GPG_KEY_ID is set

    (1449) Added possibility to define a clickable link to topic and subscription owner

    (1451) Fix handling maps in matchers

    Thanks to @wpanas for this contribution!

    (1450) Create codeql-analysis.yml

    (1453) Update and rename codeql-analysis.yml to .github/workflows/codeql-ana…

    Thanks to @bgalek for these contributions!

    Fixes

    (1445) Fixed Message preview modal

    (1447) Fixed issue with inconsistent KafkaRawMessageReader reads resulting in both 200s and 404s.

    (1452) Fix some CI failures

    Source code(tar.gz)
    Source code(zip)
  • hermes-1.10.1(Jul 20, 2022)

    Enhancements

    (1438) Add offline retransmission auditor

    (1437) Replaced custom fetcher for offline-clients with iframe

    (1440) Added object details to auditors while removing objects

    (1434) Add avro schema viewer

    Thanks to @stanczykj for this contribution!

    (1441) Added button blocking until backend responds

    Fixes

    (1435) Fixed bug in hermes-mock causing HermesMockException

    (1439) When junit-reporter fails to report, action continues to work

    Source code(tar.gz)
    Source code(zip)
Owner
Allegro Tech
Allegro Tech Open Source Projects
An Open-Source, Distributed MQTT Message Broker for IoT.

MMQ broker MMQ broker is a fully open-source, highly scalable, and highly available distributed MQTT message server, suitable for IoT, M2M and mobile applications. MMQ broker fully supports MQTT V3.1 and V3.1.1. Installation: MMQ broker is cross-platform and supports Linux, Unix, macOS

Solley 60 Dec 15, 2022
Efficient reliable UDP unicast, UDP multicast, and IPC message transport

Aeron Efficient reliable UDP unicast, UDP multicast, and IPC message transport. Java and C++ clients are available in this repository, and a .NET clie

Real Logic 6.3k Jan 9, 2023
A distributed event bus that implements a RESTful API abstraction on top of Kafka-like queues

Nakadi Event Broker Nakadi is a distributed event bus broker that implements a RESTful API abstraction on top of Kafka-like queues, which can be used

Zalando SE 866 Dec 21, 2022
Dagger is an easy-to-use, configuration over code, cloud-native framework built on top of Apache Flink for stateful processing of real-time streaming data.

Dagger Dagger or Data Aggregator is an easy-to-use, configuration over code, cloud-native framework built on top of Apache Flink for stateful processi

Open DataOps Foundation 238 Dec 22, 2022
A template and introduction for the first kafka stream application. The readme file contains all the required commands to run the Kafka cluster from scratch

Kafka Streams Template Maven Project This project will be used to create the followings: A Kafka Producer Application that will start producing random

null 2 Jan 10, 2022
Demo project for Kafka Ignite streamer, Kafka as source and Ignite cache as sink

ignite-kafka-streamer **Description : Demo project for Kafka Ignite streamer, Kafka as source and Ignite cache as sink Step-1) Run both Zookeeper and

null 1 Feb 1, 2022
Kafka example - a simple producer and consumer for kafka using spring boot + java

Kafka example - a simple producer and consumer for kafka using spring boot + java

arturcampos 1 Feb 18, 2022
Microservice-based online payment system for customers and merchants using RESTful APIs and message queues

Microservice-based online payment system for customers and merchants using RESTful APIs and message queues

Daniel Larsen 1 Mar 23, 2022
An example Twitch.tv bot that allows you to manage channel rewards (without requiring a message), and chat messages.

Twitch Bot Example shit code that can be used as a template for a twitch bot that takes advantage of channel rewards (that dont require text input) an

Evan 3 Nov 3, 2022
Firehose is an extensible, no-code, and cloud-native service to load real-time streaming data from Kafka to data stores, data lakes, and analytical storage systems.

Firehose - Firehose is an extensible, no-code, and cloud-native service to load real-time streaming data from Kafka to data stores, data lakes, and analytical storage systems.

Open DataOps Foundation 279 Dec 22, 2022
Dataflow template which read data from Kafka (Support SSL), transform, and outputs the resulting records to BigQuery

Kafka to BigQuery Dataflow Template The pipeline template read data from Kafka (Support SSL), transform the data and outputs the resulting records to

DoiT International 12 Jun 1, 2021
KC4Streams - a simple Java library that provides utility classes and standard implementations for most of the Kafka Streams pluggable interfaces

KC4Streams (which stands for Kafka Commons for Streams) is a simple Java library that provides utility classes and standard implementations for most of the Kafka Streams pluggable interfaces.

StreamThoughts 2 Mar 2, 2022
Output Keycloak Events and Admin Events to a Kafka topic.

keycloak-kafka-eventlistener Output Keycloak Events and Admin Events to a Kafka topic. Based on Keycloak 15.0.2+ / RH-SSO 7.5.0+ How to use the plugin

Dwayne Du 4 Oct 10, 2022
Mirror of Apache Kafka

Apache Kafka See our web site for details on the project. You need to have Java installed. We build and test Apache Kafka with Java 8, 11 and 15. We s

The Apache Software Foundation 23.9k Jan 5, 2023
Kryptonite is a turn-key ready transformation (SMT) for Apache Kafka® Connect to do field-level 🔒 encryption/decryption 🔓 of records. It's an UNOFFICIAL community project.

Kryptonite - An SMT for Kafka Connect Kryptonite is a turn-key ready transformation (SMT) for Apache Kafka® to do field-level encryption/decryption of

Hans-Peter Grahsl 53 Jan 3, 2023
A command line client for Kafka Connect

kcctl -- A CLI for Apache Kafka Connect This project is a command-line client for Kafka Connect. Relying on the idioms and semantics of kubectl, it al

Gunnar Morling 274 Dec 19, 2022
A command line client for Kafka Connect

kcctl – Your Cuddly CLI for Apache Kafka Connect This project is a command-line client for Kafka Connect. Relying on the idioms and semantics of ku

kcctl 274 Dec 19, 2022
Publish Kafka messages from HTTP

Kafka Bridge Publish Kafka messages from HTTP Configuration Example configuration for commonly used user + password authentication: kafka-bridge: ka

neuland - Büro für Informatik 4 Nov 9, 2021
Test implementation with Kafka

TesteKafka01 Test implementation with Kafka Project created for study and tests with Kafka Features that will be available: -Send msg -Receive ms

null 3 Sep 17, 2021