RocketMQ-on-Pulsar - A protocol handler that brings native RocketMQ protocol to Apache Pulsar

Related tags

Messaging rop
Overview

RocketMQ on Pulsar (RoP)

RoP stands for RocketMQ on Pulsar. The RoP broker supports the RocketMQ 4.6.1 protocol and is backed by Pulsar.

RoP is implemented as a Pulsar ProtocolHandler with the protocol name "rocketmq". The protocol handler is built as a NAR file and is loaded when the Pulsar broker starts.

Supported

RoP is implemented on top of Pulsar features. Currently, RoP supports the following features (a client sketch illustrating the tag and delay-level features follows the list):

  • Send and receive messages
  • Asynchronous send (SendAsync)
  • Queue-selector producer
  • Round-robin producer
  • Producer and consumer (push and pull) with namespace
  • Batch messages
  • Ordered messages
  • Send and receive with tags
  • Delay-level messages
  • Retry topic
  • DLQ topic
  • Broadcast consumer
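
For example, here is a minimal sketch of the tag-filtering and delay-level features above, written against the standard RocketMQ 4.6.1 client API and the default RoP listener rocketmq://127.0.0.1:9876. The group, topic, and tag names (rop_example_producer_group, TopicTest, TagA) are placeholders, not names defined by RoP.

import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.common.message.Message;

public class RopTagAndDelayExample {
    public static void main(String[] args) throws Exception {
        // Producer: send a tagged message with a RocketMQ delay level.
        DefaultMQProducer producer = new DefaultMQProducer("rop_example_producer_group");
        producer.setNamesrvAddr("127.0.0.1:9876"); // the RoP listener plays the name-server role
        producer.start();
        Message msg = new Message("TopicTest", "TagA", "hello rop".getBytes());
        msg.setDelayTimeLevel(3); // level 3 is 10 seconds with RocketMQ's default delay levels
        producer.send(msg);
        producer.shutdown();

        // Consumer: subscribe with a tag expression so only TagA messages are delivered.
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("rop_example_consumer_group");
        consumer.setNamesrvAddr("127.0.0.1:9876");
        consumer.subscribe("TopicTest", "TagA");
        consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
            msgs.forEach(m -> System.out.println("received: " + new String(m.getBody())));
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
        });
        consumer.start();
    }
}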

Get started

In this guide, you will learn how to use the Pulsar broker to serve requests from RocketMQ clients.

Download Pulsar

Download the Pulsar 2.7.1 binary package apache-pulsar-2.7.1-bin.tar.gz and unzip it.

Note: Currently, RoP is only compatible with Apache Pulsar 2.7.0 and above.

Download and Build RoP Plugin

You can download the RoP NAR file, or build it from the RoP sources.

To build from code, complete the following steps:

  1. Clone the project from GitHub to your local machine.
git clone https://github.com/streamnative/rop.git
cd rop
  2. Build the project.
mvn clean install -DskipTests

After the build, you can find the NAR file in the following directory:

./target/pulsar-protocol-handler-rocketmq-${version}.nar

Configuration

rocketmqTenant: RocketMQ on Pulsar broker tenant. Default: rocketmq
rocketmqMetadataTenant: The tenant used for storing RocketMQ metadata topics. Default: rocketmq
rocketmqNamespace: RocketMQ on Pulsar broker namespace. Default: default
rocketmqMetadataNamespace: The namespace used for storing RocketMQ metadata topics. Default: __rocketmq
rocketmqListeners: RocketMQ service listener address. Default: rocketmq://127.0.0.1:9876
rocketmqMaxNoOfChannels: The maximum number of channels that can exist concurrently on a connection. Default: 64
rocketmqMaxFrameSize: The maximum frame size on a connection. Default: 4194304 (4 MB)
rocketmqHeartBeat: The default heartbeat timeout of a RoP connection, in seconds. Default: 60

Configure Pulsar broker to run RoP protocol handler as Plugin

As mentioned above, the RoP module is loaded by the Pulsar broker. You need to add the following configuration to Pulsar's configuration file, such as broker.conf or standalone.conf.

  1. Protocol handler configuration

You need to add messagingProtocols (the default value is null), protocolHandlerDirectory (the default value is ./protocols), and loadManagerClassName=org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl to the Pulsar configuration file, such as broker.conf or standalone.conf. For RoP, set messagingProtocols to rocketmq and set protocolHandlerDirectory to the directory that contains the RoP NAR file.

The following is an example.

messagingProtocols=rocketmq
protocolHandlerDirectory=./protocols
loadManagerClassName=org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
  2. Set RocketMQ service listeners

Set the RocketMQ service listeners. Note that the hostname in rocketmqListeners must be the same as the hostname in the Pulsar broker's advertisedListeners.

The following is an example.

rocketmqListeners=rocketmq://127.0.0.1:9876
advertisedListeners=INTERNAL:pulsar://127.0.0.1:6650,INTERNAL:pulsar+ssl://127.0.0.1:6651,INTERNAL_ROP:pulsar://127.0.0.1:9876,INTERNAL_ROP:pulsar+ssl://127.0.0.1:9896
rocketmqListenerPortMap=9876:INTERNAL_ROP

Note: advertisedListeners and advertisedAddress cannot be configured at the same time.

Run Pulsar broker

With the above configuration, you can start your Pulsar broker. For details, refer to the Pulsar Get Started guides.

cd apache-pulsar-2.7.1
bin/pulsar standalone -nss -nfw

Run RocketMQ Client to verify

The RoP repository provides an examples sub-module that covers a variety of RocketMQ client scenarios. You can run these examples directly in your IDE, or you can download the RocketMQ source code and run its client examples. A minimal producer sketch is shown below.
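
As a quick sanity check, here is a minimal producer sketch that sends a single message through the RoP listener of the standalone broker started above; the group and topic names are placeholders.

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

public class RopQuickCheck {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("rop_verify_group");
        // Point the client at the RoP listener (rocketmqListeners), not at a separate name server.
        producer.setNamesrvAddr("127.0.0.1:9876");
        producer.start();
        SendResult result = producer.send(new Message("TopicTest", "TagA", "ping".getBytes()));
        System.out.println("send status: " + result.getSendStatus());
        producer.shutdown();
    }
}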

Log level configuration

You can set the RoP log level in Pulsar's log4j2.yaml configuration file.

The following is an example.

    Logger:
      - name: RocketMQProtocolHandler
        level: debug
        additivity: false
        AppenderRef:
          - ref: Console

Comments
  • [Bug] failed to produce message with rocketMQ producer

    I deployed RoP as a plugin in a Pulsar standalone server and produced messages with a RocketMQ producer via the command sh bin/tools.sh org.apache.rocketmq.example.quickstart.Producer, and it failed with a "No route info of this topic: TopicTest" error.

    • Pulsar version: 2.8.0
    • Rop version: 0.1.0
    • RocketMQ Producer Version: 4.9.0

    The stack trace of the RocketMQ producer is:

    org.apache.rocketmq.client.exception.MQClientException: No route info of this topic: TopicTest
    See http://rocketmq.apache.org/docs/faq/ for further details.
            at org.apache.rocketmq.client.impl.producer.DefaultMQProducerImpl.sendDefaultImpl(DefaultMQProducerImpl.java:694)
            at org.apache.rocketmq.client.impl.producer.DefaultMQProducerImpl.send(DefaultMQProducerImpl.java:1384)
            at org.apache.rocketmq.client.impl.producer.DefaultMQProducerImpl.send(DefaultMQProducerImpl.java:1328)
            at org.apache.rocketmq.client.producer.DefaultMQProducer.send(DefaultMQProducer.java:330)
            at org.apache.rocketmq.example.quickstart.Producer.main(Producer.java:67)
    

    And there are error logs in the Pulsar server output; the full log is:

    11:09:31.781 [AdminBrokerThread_11] WARN  org.streamnative.pulsar.handlers.rocketmq.inner.namesvr.NameserverProcessor - fetch topic address of topic[TBW102] error.
    java.lang.ClassCastException: org.apache.pulsar.broker.loadbalance.NoopLoadManager cannot be cast to org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerWrapper
            at org.streamnative.pulsar.handlers.rocketmq.inner.namesvr.NameserverProcessor.getModularLoadManagerImpl(NameserverProcessor.java:328) ~[pulsar-protocol-handler-rocketmq-0.1.0.nar-unpacked/:?]
            at org.streamnative.pulsar.handlers.rocketmq.inner.namesvr.NameserverProcessor.getBrokerAddressByListenerName(NameserverProcessor.java:299) ~[pulsar-protocol-handler-rocketmq-0.1.0.nar-unpacked/:?]
            at org.streamnative.pulsar.handlers.rocketmq.inner.namesvr.NameserverProcessor.handleTopicMetadata(NameserverProcessor.java:195) [pulsar-protocol-handler-rocketmq-0.1.0.nar-unpacked/:?]
            at org.streamnative.pulsar.handlers.rocketmq.inner.namesvr.NameserverProcessor.processRequest(NameserverProcessor.java:110) [pulsar-protocol-handler-rocketmq-0.1.0.nar-unpacked/:?]
            at org.streamnative.pulsar.handlers.rocketmq.inner.NettyRemotingAbstract$1.run(NettyRemotingAbstract.java:205) [pulsar-protocol-handler-rocketmq-0.1.0.nar-unpacked/:?]
            at org.apache.rocketmq.remoting.netty.RequestTask.run(RequestTask.java:80) [rocketmq-remoting-4.6.1.jar:4.6.1]
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_192]
            at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_192]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_192]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_192]
            at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192]
    

    When I checked the RoP source code, it shows that NoopLoadManager implements LoadManager, and LoadManager is not a subclass of ModularLoadManager.

    opened by xiaozongyang 2
  • Bump jackson-databind from 2.12.1 to 2.13.4.1

    Bumps jackson-databind from 2.12.1 to 2.13.4.1.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 1
  • Bump jackson-databind from 2.12.1 to 2.12.6.1

    Bumps jackson-databind from 2.12.1 to 2.12.6.1.

    Commits

    dependencies 
    opened by dependabot[bot] 1
  • feat:support process consumerSendMsgBack as throw exception,

    consumerSendMsgBack currently reads from Pulsar, which can be slow in some scenarios.

    We can let users throw an exception instead; RocketMQ will then resend the message as a normal message.

    opened by leizhiyuan 1
  • Bump checkstyle from 6.19 to 8.29

    Bumps checkstyle from 6.19 to 8.29.

    Release notes

    Sourced from checkstyle's releases.

    checkstyle-8.29

    https://checkstyle.org/releasenotes.html#Release_8.29

    checkstyle-8.28

    https://checkstyle.org/releasenotes.html#Release_8.28

    checkstyle-8.27

    https://checkstyle.org/releasenotes.html#Release_8.27

    checkstyle-8.26

    https://checkstyle.org/releasenotes.html#Release_8.26

    checkstyle-8.25

    https://checkstyle.org/releasenotes.html#Release_8.25

    checkstyle-8.24

    https://checkstyle.org/releasenotes.html#Release_8.24

    checkstyle-8.23

    https://checkstyle.org/releasenotes.html#Release_8.23

    checkstyle-8.22

    https://checkstyle.org/releasenotes.html#Release_8.22

    checkstyle-8.21

    https://checkstyle.org/releasenotes.html#Release_8.21

    checkstyle-8.20

    https://checkstyle.org/releasenotes.html#Release_8.20

    checkstyle-8.19

    https://checkstyle.org/releasenotes.html#Release_8.19

    checkstyle-8.18

    https://checkstyle.org/releasenotes.html#Release_8.18

    checkstyle-8.17

    https://checkstyle.org/releasenotes.html#Release_8.17

    checkstyle-8.16

    https://checkstyle.org/releasenotes.html#Release_8.16

    checkstyle-8.15

    https://checkstyle.org/releasenotes.html#Release_8.15

    checkstyle-8.14

    http://checkstyle.sourceforge.net/releasenotes.html#Release_8.14

    checkstyle-8.13

    http://checkstyle.sourceforge.net/releasenotes.html#Release_8.13

    ... (truncated)

    Commits
    • 8933d03 [maven-release-plugin] prepare release checkstyle-8.29
    • bd45909 Issue #7487: refactor code to use DetailAST.hasChildren()
    • 317e51f Issue #7487: add method hasChildren() to DetailAST
    • 89b4dcd Issue #3238: Java 8 Grammar: annotations on arrays and varargs
    • 252cd89 dependency: bump junit-pioneer from 0.5.1 to 0.5.2
    • 2ee2615 dependency: bump junit.version from 5.5.2 to 5.6.0
    • 4ed7cb8 minor: add space before xml comment end '-->' to ease reading and make links ...
    • c46a16d Issue #7468: disable 'external-parameter-entities' feature by default
    • dfed794 minor: add missing test case to SuperCloneCheckTest
    • 24e7bdf dependency: bump antlr4.version from 4.7.2 to 4.8-1
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 1
  • Bump netty-all from 4.1.34.Final to 4.1.42.Final

    Bumps netty-all from 4.1.34.Final to 4.1.42.Final.

    Commits
    • bd907c3 [maven-release-plugin] prepare release netty-4.1.42.Final
    • 2791f0f Avoid use of global AtomicLong for ScheduledFutureTask ids (#9599)
    • 86ff76a Fix incorrect comment (#9598)
    • 5e69a13 Cleanup JNI code to always correctly free memory when loading fails and also ...
    • eb3c4bd ChunkedNioFile can use absolute FileChannel::read to read chunks (#9592)
    • 76592db Close eventfd shutdown/wakeup race by closely tracking epoll edges (#9586)
    • 0a2d85f Fix GraalVM native image build error (#9593)
    • dc4de7f We need to use NewGloblRef when caching jclass instances (#9595)
    • 4499384 Update to netty-tcnative 2.0.26.Final (#9589)
    • 8648171 Fix *SslEngineTest to not throw ClassCastException and pass in all cases (#9588)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 1
  • Bump log4j-core from 2.17.0 to 2.17.1

    Bumps log4j-core from 2.17.0 to 2.17.1.

    dependencies 
    opened by dependabot[bot] 0
  • Bump jackson-databind from 2.12.1 to 2.12.7.1

    Bumps jackson-databind from 2.12.1 to 2.12.7.1.

    Commits

    dependencies 
    opened by dependabot[bot] 0
  • Bump pulsar-broker from 2.8.1 to 2.8.4

    Bumps pulsar-broker from 2.8.1 to 2.8.4.

    Release notes

    Sourced from pulsar-broker's releases.

    v2.8.3

    Important Notices

    • Fix detecting number of NICs in EC2 #14252. In the event that Pulsar cannot determine the NIC speed from the host, please set loadBalancerOverrideBrokerNicSpeedGbps.
    • Bump BookKeeper 4.14.3 12906
    • Add broker config isAllowAutoUpdateSchema 12786

    Security

    • Upgrade Postgres driver to 42.2.25 to get rid of CVE-2022-21724 14119
    • Get rid of CVEs in Solr connector 13822
    • Get rid of CVEs in InfluxDB connector 13821
    • Get rid of CVEs in batch-data-generator 13820
    • Get rid of CVEs brought in with aerospike 13819
    • [owasp] suppress false positive Avro CVE-2021-43045 13764
    • Upgrade protobuf to 3.16.1 to address CVE-2021-22569 13695
    • Upgrade Jackson to 2.12.6 13694
    • Upgrade Log4j to 2.17.1 to address CVE-2021-44832 13552
    • Cipher params not work in KeyStoreSSLContext 13322
    • [Broker] Remove tenant permission verification when list partitioned-topic 13138
    • Use JDK default security provider when Conscrypt isn't available 12938
    • [Authorization] Return if namespace policies are read only 12514

    Pulsar Admin

    • Make sure policies.is_allow_auto_update_schema not null 14409
    • pulsar admin exposes secret for source and sink 13059
    • Fix deleting tenants with active namespaces with 500. 13020
    • [function] pulsar admin exposes secrets for function 12950

    Bookkeeper

    • Upgrade BK to 4.14.4 and Grpc to 1.42.1 13714
    • Bump BookKeeper 4.14.3 12906

    Broker

    • Fix the wrong parameter in the log. 14309
    • Fix batch ack count is negative issue. 14288
    • bug fix: IllegalArgumentException: Invalid period 0.0 to calculate rate 14280
    • Clean up individually deleted messages before the mark-delete position 14261
    • If mark-delete operation fails, mark the cursor as "dirty" 14256
    • Fixed detecting number of NICs in EC2 14252
    • Remove log unacked msg. 14246
    • Change broker producer fence log level 14196
    • Fix NPE of cumulative ack mode and incorrect unack message count 14021
    • KeyShared stickyHashRange subscription: prevent stuck subscription in case of consumer restart 14014
    • Trim configuration value string which contains blank prefix or suffix 13984
    • waitingCursors potential heap memory leak 13939
    • Fix read schema compatibility strategy priority 13938
    • NPE when get isAllowAutoUploadSchema 13831
    • Fix call sync method in async rest API for internalGetSubscriptionsForNonPartitionedTopic 13745
    • Fix the deadlock while using zookeeper thread to create ledger 13744
    • Fix inefficient forEach loop 13742

    ... (truncated)

    Commits
    • 02ee561 Release 2.8.4
    • 9bc0115 Fix testProducerInvalidMessageMemoryRelease
    • c038898 Fix AuthenticationProviderBasicTest
    • c8c1c09 [improve][authentication] Adapt basic authentication configuration with prefi...
    • 6b3e46f Fix testProducerSemaphoreInvalidMessage by removing usages of mockStatic
    • 59339c4 [fix][client]Fix MaxQueueSize semaphore release leak in createOpSendMsg (#16915)
    • a501593 Forget to update memory usage when invalid message (#16835)
    • 7107657 Fix the compilation error when cherry-picking cdec98a
    • 05b16e2 [improve][test] Verify the authentication data in the authorization provider ...
    • acb4eba [improve][authentication] Improve get the basic authentication config (#16526)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0
  • Pulsar 2.10.1 fails to start

    It starts on Pulsar 2.7.1, but Pulsar fails to start properly on 2.10.1 or 2.7.5.

    1. broker.conf
    #
    # Licensed to the Apache Software Foundation (ASF) under one
    # or more contributor license agreements.  See the NOTICE file
    # distributed with this work for additional information
    # regarding copyright ownership.  The ASF licenses this file
    # to you under the Apache License, Version 2.0 (the
    # "License"); you may not use this file except in compliance
    # with the License.  You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing,
    # software distributed under the License is distributed on an
    # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    # KIND, either express or implied.  See the License for the
    # specific language governing permissions and limitations
    # under the License.
    #
    
    ### --- General broker settings --- ###
    
    # Zookeeper quorum connection string
    zookeeperServers=
    
    # Configuration Store connection string
    configurationStoreServers=
    
    brokerServicePort=6650
    
    # Port to use to server HTTP request
    webServicePort=8080
    
    # Hostname or IP address the service binds on, default is 0.0.0.0.
    bindAddress=0.0.0.0
    
    # Extra bind addresses for the service: <listener_name>:<scheme>://<host>:<port>,[...]
    bindAddresses=
    
    # Hostname or IP address the service advertises to the outside world. If not set, the value of InetAddress.getLocalHost().getHostName() is used.
    #advertisedAddress=
    
    # Enable or disable the HAProxy protocol.
    haProxyProtocolEnabled=false
    
    # Number of threads to use for Netty IO. Default is set to 2 * Runtime.getRuntime().availableProcessors()
    numIOThreads=
    
    # Number of threads to use for ordered executor. The ordered executor is used to operate with zookeeper,
    # such as init zookeeper client, get namespace policies from zookeeper etc. It also used to split bundle. Default is 8
    numOrderedExecutorThreads=8
    
    # Number of threads to use for HTTP requests processing. Default is set to 2 * Runtime.getRuntime().availableProcessors()
    numHttpServerThreads=
    
    # Number of thread pool size to use for pulsar broker service.
    # The executor in thread pool will do basic broker operation like load/unload bundle, update managedLedgerConfig,
    # update topic/subscription/replicator message dispatch rate, do leader election etc.
    # Default is Runtime.getRuntime().availableProcessors()
    numExecutorThreadPoolSize=
    
    # Number of thread pool size to use for pulsar zookeeper callback service
    # The cache executor thread pool is used for restarting global zookeeper session.
    # Default is 10
    numCacheExecutorThreadPoolSize=10
    
    # Max concurrent web requests
    maxConcurrentHttpRequests=1024
    
    # Name of the cluster to which this broker belongs to
    clusterName=standalone
    
    # Enable cluster's failure-domain which can distribute brokers into logical region
    failureDomainsEnabled=false
    
    # Metadata store session timeout in milliseconds
    metadataStoreSessionTimeoutMillis=30000
    
    # Metadata store operation timeout in seconds
    metadataStoreOperationTimeoutSeconds=30
    
    # Metadata store cache expiry time in seconds
    metadataStoreCacheExpirySeconds=300
    
    # Time to wait for broker graceful shutdown. After this time elapses, the process will be killed
    brokerShutdownTimeoutMs=60000
    
    # Flag to skip broker shutdown when broker handles Out of memory error
    skipBrokerShutdownOnOOM=false
    
    # Enable backlog quota check. Enforces action on topic when the quota is reached
    backlogQuotaCheckEnabled=true
    
    # How often to check for topics that have reached the quota
    backlogQuotaCheckIntervalInSeconds=60
    
    # Default per-topic backlog quota limit
    backlogQuotaDefaultLimitGB=10
    
    # Default per-topic backlog quota time limit in second, less than 0 means no limitation. default is -1.
    backlogQuotaDefaultLimitSecond=-1
    
    # Default ttl for namespaces if ttl is not already configured at namespace policies. (disable default-ttl with value 0)
    ttlDurationDefaultInSeconds=0
    
    # Enable the deletion of inactive topics. This parameter need to cooperate with the allowAutoTopicCreation parameter.
    # If brokerDeleteInactiveTopicsEnabled is set to true, we should ensure that allowAutoTopicCreation is also set to true.
    #brokerDeleteInactiveTopicsEnabled=true
    
    # How often to check for inactive topics
    brokerDeleteInactiveTopicsFrequencySeconds=60
    
    # Allow you to delete a tenant forcefully.
    forceDeleteTenantAllowed=false
    
    # Allow you to delete a namespace forcefully.
    forceDeleteNamespaceAllowed=false
    
    # Max pending publish requests per connection to avoid keeping large number of pending
    # requests in memory. Default: 1000
    maxPendingPublishRequestsPerConnection=1000
    
    # How frequently to proactively check and purge expired messages
    messageExpiryCheckIntervalInMinutes=5
    
    # Check between intervals to see if max message size in topic policies has been updated.
    # Default is 60s
    maxMessageSizeCheckIntervalInSeconds=60
    
    # How long to delay rewinding cursor and dispatching messages when active consumer is changed
    activeConsumerFailoverDelayTimeMillis=1000
    
    # How long to delete inactive subscriptions from last consuming
    # When it is 0, inactive subscriptions are not deleted automatically
    subscriptionExpirationTimeMinutes=0
    
    # Enable subscription message redelivery tracker to send redelivery count to consumer (default is enabled)
    subscriptionRedeliveryTrackerEnabled=true
    
    # On KeyShared subscriptions, with default AUTO_SPLIT mode, use splitting ranges or
    # consistent hashing to reassign keys to new consumers
    subscriptionKeySharedUseConsistentHashing=true
    
    # On KeyShared subscriptions, number of points in the consistent-hashing ring.
    # The higher the number, the more equal the assignment of keys to consumers
    subscriptionKeySharedConsistentHashingReplicaPoints=100
    
    # How frequently to proactively check and purge expired subscription
    subscriptionExpiryCheckIntervalInMinutes=5
    
    # Set the default behavior for message deduplication in the broker
    # This can be overridden per-namespace. If enabled, broker will reject
    # messages that were already stored in the topic
    brokerDeduplicationEnabled=false
    
    # Maximum number of producer information that it's going to be
    # persisted for deduplication purposes
    brokerDeduplicationMaxNumberOfProducers=10000
    
    # Number of entries after which a dedup info snapshot is taken.
    # A bigger interval will lead to less snapshots being taken though it would
    # increase the topic recovery time, when the entries published after the
    # snapshot need to be replayed
    brokerDeduplicationEntriesInterval=1000
    
    # Time of inactivity after which the broker will discard the deduplication information
    # relative to a disconnected producer. Default is 6 hours.
    brokerDeduplicationProducerInactivityTimeoutMinutes=360
    
    # When a namespace is created without specifying the number of bundle, this
    # value will be used as the default
    defaultNumberOfNamespaceBundles=4
    
    # Max number of topics allowed to be created in the namespace. When the topics reach the max topics of the namespace,
    # the broker should reject the new topic request(include topic auto-created by the producer or consumer)
    # until the number of connected consumers decrease.
    # Using a value of 0, is disabling maxTopicsPerNamespace-limit check.
    maxTopicsPerNamespace=0
    
    # Allow schema to be auto updated at broker level. User can override this by
    # 'is_allow_auto_update_schema' of namespace policy.
    isAllowAutoUpdateSchemaEnabled=true
    
    # Enable check for minimum allowed client library version
    clientLibraryVersionCheckEnabled=false
    
    # Path for the file used to determine the rotation status for the broker when responding
    # to service discovery health checks
    statusFilePath=/usr/local/apache/htdocs
    
    # Max number of unacknowledged messages allowed to receive messages by a consumer on a shared subscription. Broker will stop sending
    # messages to consumer once, this limit reaches until consumer starts acknowledging messages back
    # Using a value of 0, is disabling unackeMessage limit check and consumer can receive messages without any restriction
    maxUnackedMessagesPerConsumer=50000
    
    # Max number of unacknowledged messages allowed per shared subscription. Broker will stop dispatching messages to
    # all consumers of the subscription once this limit reaches until consumer starts acknowledging messages back and
    # unack count reaches to limit/2. Using a value of 0, is disabling unackedMessage-limit
    # check and dispatcher can dispatch messages without any restriction
    maxUnackedMessagesPerSubscription=200000
    
    # Max number of unacknowledged messages allowed per broker. Once this limit reaches, broker will stop dispatching
    # messages to all shared subscription which has higher number of unack messages until subscriptions start
    # acknowledging messages back and unack count reaches to limit/2. Using a value of 0, is disabling
    # unackedMessage-limit check and broker doesn't block dispatchers
    maxUnackedMessagesPerBroker=0
    
    # Once broker reaches maxUnackedMessagesPerBroker limit, it blocks subscriptions which has higher unacked messages
    # than this percentage limit and subscription will not receive any new messages until that subscription acks back
    # limit/2 messages
    maxUnackedMessagesPerSubscriptionOnBrokerBlocked=0.16
    
    # Tick time to schedule task that checks topic publish rate limiting across all topics
    # Reducing to lower value can give more accuracy while throttling publish but
    # it uses more CPU to perform frequent check. (Disable publish throttling with value 0)
    topicPublisherThrottlingTickTimeMillis=2
    
    # Enable precise rate limit for topic publish
    preciseTopicPublishRateLimiterEnable=false
    
    # Tick time to schedule task that checks broker publish rate limiting across all topics
    # Reducing to lower value can give more accuracy while throttling publish but
    # it uses more CPU to perform frequent check. (Disable publish throttling with value 0)
    brokerPublisherThrottlingTickTimeMillis=50
    
    # Max Rate(in 1 seconds) of Message allowed to publish for a broker if broker publish rate limiting enabled
    # (Disable message rate limit with value 0)
    brokerPublisherThrottlingMaxMessageRate=0
    
    # Max Rate(in 1 seconds) of Byte allowed to publish for a broker if broker publish rate limiting enabled
    # (Disable byte rate limit with value 0)
    brokerPublisherThrottlingMaxByteRate=0
    
    # Default messages per second dispatch throttling-limit for every topic. Using a value of 0, is disabling default
    # message dispatch-throttling
    dispatchThrottlingRatePerTopicInMsg=0
    
    # Default bytes per second dispatch throttling-limit for every topic. Using a value of 0, is disabling
    # default message-byte dispatch-throttling
    dispatchThrottlingRatePerTopicInByte=0
    
    # Apply dispatch rate limiting on batch message instead individual
    # messages with in batch message. (Default is disabled)
    dispatchThrottlingOnBatchMessageEnabled=false
    
    # Dispatch rate-limiting relative to publish rate.
    # (Enabling flag will make broker to dynamically update dispatch-rate relatively to publish-rate:
    # throttle-dispatch-rate = (publish-rate + configured dispatch-rate).
    dispatchThrottlingRateRelativeToPublishRate=false
    
    # By default we enable dispatch-throttling for both caught up consumers as well as consumers who have
    # backlog.
    dispatchThrottlingOnNonBacklogConsumerEnabled=true
    
    # The read failure backoff initial time in milliseconds. By default it is 15s.
    dispatcherReadFailureBackoffInitialTimeInMs=15000
    
    # The read failure backoff max time in milliseconds. By default it is 60s.
    dispatcherReadFailureBackoffMaxTimeInMs=60000
    
    # The read failure backoff mandatory stop time in milliseconds. By default it is 0s.
    dispatcherReadFailureBackoffMandatoryStopTimeInMs=0
    
    # Precise dispathcer flow control according to history message number of each entry
    preciseDispatcherFlowControl=false
    
    # Max number of concurrent lookup request broker allows to throttle heavy incoming lookup traffic
    maxConcurrentLookupRequest=50000
    
    # Max number of concurrent topic loading request broker allows to control number of zk-operations
    maxConcurrentTopicLoadRequest=5000
    
    # Max concurrent non-persistent message can be processed per connection
    maxConcurrentNonPersistentMessagePerConnection=1000
    
    # Number of worker threads to serve non-persistent topic
    numWorkerThreadsForNonPersistentTopic=8
    
    # Enable broker to load persistent topics
    enablePersistentTopics=true
    
    # Enable broker to load non-persistent topics
    enableNonPersistentTopics=true
    
    # Max number of producers allowed to connect to topic. Once this limit reaches, Broker will reject new producers
    # until the number of connected producers decrease.
    # Using a value of 0, is disabling maxProducersPerTopic-limit check.
    maxProducersPerTopic=0
    
    # Max number of producers with the same IP address allowed to connect to topic.
    # Once this limit reaches, Broker will reject new producers until the number of
    # connected producers with the same IP address decrease.
    # Using a value of 0, is disabling maxSameAddressProducersPerTopic-limit check.
    maxSameAddressProducersPerTopic=0
    
    # Enforce producer to publish encrypted messages.(default disable).
    encryptionRequireOnProducer=false
    
    # Max number of consumers allowed to connect to topic. Once this limit reaches, Broker will reject new consumers
    # until the number of connected consumers decrease.
    # Using a value of 0, is disabling maxConsumersPerTopic-limit check.
    maxConsumersPerTopic=0
    
    # Max number of consumers with the same IP address allowed to connect to topic.
    # Once this limit reaches, Broker will reject new consumers until the number of
    # connected consumers with the same IP address decrease.
    # Using a value of 0, is disabling maxSameAddressConsumersPerTopic-limit check.
    maxSameAddressConsumersPerTopic=0
    
    # Max number of subscriptions allowed to subscribe to topic. Once this limit reaches, broker will reject
    # new subscription until the number of subscribed subscriptions decrease.
    # Using a value of 0, is disabling maxSubscriptionsPerTopic limit check.
    maxSubscriptionsPerTopic=0
    
    # Max number of consumers allowed to connect to subscription. Once this limit reaches, Broker will reject new consumers
    # until the number of connected consumers decrease.
    # Using a value of 0, is disabling maxConsumersPerSubscription-limit check.
    maxConsumersPerSubscription=0
    
    # Max number of partitions per partitioned topic
    # Use 0 or negative number to disable the check
    maxNumPartitionsPerPartitionedTopic=0
    
    ### --- Metadata Store --- ###
    
    # Whether we should enable metadata operations batching
    metadataStoreBatchingEnabled=true
    
    # Maximum delay to impose on batching grouping
    metadataStoreBatchingMaxDelayMillis=5
    
    # Maximum number of operations to include in a singular batch
    metadataStoreBatchingMaxOperations=1000
    
    # Maximum size of a batch
    metadataStoreBatchingMaxSizeKb=128
    
    ### --- TLS --- ###
    # Deprecated - Use webServicePortTls and brokerServicePortTls instead
    tlsEnabled=false
    
    # Tls cert refresh duration in seconds (set 0 to check on every new connection)
    tlsCertRefreshCheckDurationSec=300
    
    # Path for the TLS certificate file
    tlsCertificateFilePath=
    
    # Path for the TLS private key file
    tlsKeyFilePath=
    
    # Path for the trusted TLS certificate file.
    # This cert is used to verify that any certs presented by connecting clients
    # are signed by a certificate authority. If this verification
    # fails, then the certs are untrusted and the connections are dropped.
    tlsTrustCertsFilePath=
    
    # Accept untrusted TLS certificate from client.
    # If true, a client with a cert which cannot be verified with the
    # 'tlsTrustCertsFilePath' cert will allowed to connect to the server,
    # though the cert will not be used for client authentication.
    tlsAllowInsecureConnection=false
    
    # Specify the tls protocols the broker will use to negotiate during TLS handshake
    # (a comma-separated list of protocol names).
    # Examples:- [TLSv1.3, TLSv1.2]
    tlsProtocols=
    
    # Specify the tls cipher the broker will use to negotiate during TLS Handshake
    # (a comma-separated list of ciphers).
    # Examples:- [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]
    tlsCiphers=
    
    # Trusted client certificates are required for to connect TLS
    # Reject the Connection if the Client Certificate is not trusted.
    # In effect, this requires that all connecting clients perform TLS client
    # authentication.
    tlsRequireTrustedClientCertOnConnect=false
    
    # Specify the TLS provider for the broker service:
    # When using TLS authentication with CACert, the valid value is either OPENSSL or JDK.
    # When using TLS authentication with KeyStore, available values can be SunJSSE, Conscrypt and etc.
    tlsProvider=
    
    # Specify the TLS provider for the web service: SunJSSE, Conscrypt and etc.
    webServiceTlsProvider=Conscrypt
    
    ### --- KeyStore TLS config variables --- ###
    # Enable TLS with KeyStore type configuration in broker.
    tlsEnabledWithKeyStore=false
    
    # TLS KeyStore type configuration in broker: JKS, PKCS12
    tlsKeyStoreType=JKS
    
    # TLS KeyStore path in broker
    tlsKeyStore=
    
    # TLS KeyStore password for broker
    tlsKeyStorePassword=
    
    # TLS TrustStore type configuration in broker: JKS, PKCS12
    tlsTrustStoreType=JKS
    
    # TLS TrustStore path in broker
    tlsTrustStore=
    
    # TLS TrustStore password for broker
    tlsTrustStorePassword=
    
    # Whether internal client use KeyStore type to authenticate with Pulsar brokers
    brokerClientTlsEnabledWithKeyStore=false
    
    # The TLS Provider used by internal client to authenticate with other Pulsar brokers
    brokerClientSslProvider=
    
    # TLS TrustStore type configuration for internal client: JKS, PKCS12
    # used by the internal client to authenticate with Pulsar brokers
    brokerClientTlsTrustStoreType=JKS
    
    # TLS TrustStore path for internal client
    # used by the internal client to authenticate with Pulsar brokers
    brokerClientTlsTrustStore=
    
    # TLS TrustStore password for internal client,
    # used by the internal client to authenticate with Pulsar brokers
    brokerClientTlsTrustStorePassword=
    
    # Specify the tls cipher the internal client will use to negotiate during TLS Handshake
    # (a comma-separated list of ciphers)
    # e.g.  [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256].
    # used by the internal client to authenticate with Pulsar brokers
    brokerClientTlsCiphers=
    
    # Specify the tls protocols the broker will use to negotiate during TLS handshake
    # (a comma-separated list of protocol names).
    # e.g.  [TLSv1.3, TLSv1.2]
    # used by the internal client to authenticate with Pulsar brokers
    brokerClientTlsProtocols=
    
    # Enable or disable system topic
    systemTopicEnabled=false
    
    # The schema compatibility strategy is used for system topics.
    # Available values: ALWAYS_INCOMPATIBLE, ALWAYS_COMPATIBLE, BACKWARD, FORWARD, FULL, BACKWARD_TRANSITIVE, FORWARD_TRANSITIVE, FULL_TRANSITIVE
    systemTopicSchemaCompatibilityStrategy=ALWAYS_COMPATIBLE
    
    # Enable or disable topic level policies, topic level policies depends on the system topic
    # Please enable the system topic first.
    topicLevelPoliciesEnabled=false
    
    # If a topic remains fenced for this number of seconds, it will be closed forcefully.
    # If it is set to 0 or a negative number, the fenced topic will not be closed.
    topicFencingTimeoutSeconds=0
    
    ### --- Authentication --- ###
    # Role names that are treated as "proxy roles". If the broker sees a request with
    #role as proxyRoles - it will demand to see a valid original principal.
    proxyRoles=
    
    # If this flag is set then the broker authenticates the original Auth data
    # else it just accepts the originalPrincipal and authorizes it (if required).
    authenticateOriginalAuthData=false
    
    # Enable authentication
    authenticationEnabled=false
    
    # Authentication provider name list, which is comma separated list of class names
    authenticationProviders=
    
    # Enforce authorization
    authorizationEnabled=false
    
    # Authorization provider fully qualified class-name
    authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
    
    # Allow wildcard matching in authorization
    # (wildcard matching only applicable if wildcard-char:
    # * presents at first or last position eg: *.pulsar.service, pulsar.service.*)
    authorizationAllowWildcardsMatching=false
    
    # Role names that are treated as "super-user", meaning they will be able to do all admin
    # operations and publish/consume from all topics
    superUserRoles=
    
    # Authentication settings of the broker itself. Used when the broker connects to other brokers,
    # either in same or other clusters
    brokerClientAuthenticationPlugin=
    brokerClientAuthenticationParameters=
    
    # Supported Athenz provider domain names(comma separated) for authentication
    athenzDomainNames=
    
    # When this parameter is not empty, unauthenticated users perform as anonymousUserRole
    anonymousUserRole=
    
    
    ### --- Token Authentication Provider --- ###
    
    ## Symmetric key
    # Configure the secret key to be used to validate auth tokens
    # The key can be specified like:
    # tokenSecretKey=data:;base64,xxxxxxxxx
    # tokenSecretKey=file:///my/secret.key  ( Note: key file must be DER-encoded )
    tokenSecretKey=
    
    ## Asymmetric public/private key pair
    # Configure the public key to be used to validate auth tokens
    # The key can be specified like:
    # tokenPublicKey=data:;base64,xxxxxxxxx
    # tokenPublicKey=file:///my/public.key    ( Note: key file must be DER-encoded )
    tokenPublicKey=
    
    
    # The token "claim" that will be interpreted as the authentication "role" or "principal" by AuthenticationProviderToken (defaults to "sub" if blank)
    tokenAuthClaim=
    
    # The token audience "claim" name, e.g. "aud", that will be used to get the audience from token.
    # If not set, audience will not be verified.
    tokenAudienceClaim=
    
    # The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token, need contains this.
    tokenAudience=
    
    ### --- BookKeeper Client --- ###
    
    # Authentication plugin to use when connecting to bookies
    bookkeeperClientAuthenticationPlugin=
    
    # BookKeeper auth plugin implementatation specifics parameters name and values
    bookkeeperClientAuthenticationParametersName=
    bookkeeperClientAuthenticationParameters=
    
    # Timeout for BK add / read operations
    bookkeeperClientTimeoutInSeconds=30
    
    # Number of BookKeeper client worker threads
    # Default is Runtime.getRuntime().availableProcessors()
    bookkeeperClientNumWorkerThreads=
    
    # Speculative reads are initiated if a read request doesn't complete within a certain time
    # Using a value of 0, is disabling the speculative reads
    bookkeeperClientSpeculativeReadTimeoutInMillis=0
    
    # Number of channels per bookie
    bookkeeperNumberOfChannelsPerBookie=16
    
    # Enable bookies health check. Bookies that have more than the configured number of failure within
    # the interval will be quarantined for some time. During this period, new ledgers won't be created
    # on these bookies
    bookkeeperClientHealthCheckEnabled=true
    bookkeeperClientHealthCheckIntervalSeconds=60
    bookkeeperClientHealthCheckErrorThresholdPerInterval=5
    bookkeeperClientHealthCheckQuarantineTimeInSeconds=1800
    
    #bookie quarantine ratio to avoid all clients quarantine the high pressure bookie servers at the same time
    bookkeeperClientQuarantineRatio=1.0
    
    # Enable rack-aware bookie selection policy. BK will chose bookies from different racks when
    # forming a new bookie ensemble
    # This parameter related to ensemblePlacementPolicy in conf/bookkeeper.conf, if enabled, ensemblePlacementPolicy
    # should be set to org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicy
    bookkeeperClientRackawarePolicyEnabled=true
    
    # Enable region-aware bookie selection policy. BK will chose bookies from
    # different regions and racks when forming a new bookie ensemble.
    # If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored
    # This parameter related to ensemblePlacementPolicy in conf/bookkeeper.conf, if enabled, ensemblePlacementPolicy
    # should be set to org.apache.bookkeeper.client.RegionAwareEnsemblePlacementPolicy
    bookkeeperClientRegionawarePolicyEnabled=false
    
    # Minimum number of racks per write quorum. BK rack-aware bookie selection policy will try to
    # get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum.
    bookkeeperClientMinNumRacksPerWriteQuorum=1
    
    # Enforces rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum'
    # racks for a writeQuorum.
    # If BK can't find bookie then it would throw BKNotEnoughBookiesException instead of picking random one.
    bookkeeperClientEnforceMinNumRacksPerWriteQuorum=false
    
    # Enable/disable reordering read sequence on reading entries.
    bookkeeperClientReorderReadSequenceEnabled=false
    
    # Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie
    # outside the specified groups will not be used by the broker
    bookkeeperClientIsolationGroups=
    
    # Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't
    # have enough bookies available.
    bookkeeperClientSecondaryIsolationGroups=
    
    # Minimum bookies that should be available as part of bookkeeperClientIsolationGroups
    # else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list.
    bookkeeperClientMinAvailableBookiesInIsolationGroups=
    
    # Set the client security provider factory class name.
    # Default: org.apache.bookkeeper.tls.TLSContextFactory
    bookkeeperTLSProviderFactoryClass=org.apache.bookkeeper.tls.TLSContextFactory
    
    # Enable tls authentication with bookie
    bookkeeperTLSClientAuthentication=false
    
    # Supported type: PEM, JKS, PKCS12. Default value: PEM
    bookkeeperTLSKeyFileType=PEM
    
    #Supported type: PEM, JKS, PKCS12. Default value: PEM
    bookkeeperTLSTrustCertTypes=PEM
    
    # Path to file containing keystore password, if the client keystore is password protected.
    bookkeeperTLSKeyStorePasswordPath=
    
    # Path to file containing truststore password, if the client truststore is password protected.
    bookkeeperTLSTrustStorePasswordPath=
    
    # Path for the TLS private key file
    bookkeeperTLSKeyFilePath=
    
    # Path for the TLS certificate file
    bookkeeperTLSCertificateFilePath=
    
    # Path for the trusted TLS certificate file
    bookkeeperTLSTrustCertsFilePath=
    
    # Enable/disable disk weight based placement. Default is false
    bookkeeperDiskWeightBasedPlacementEnabled=false
    
    # Set the interval to check the need for sending an explicit LAC
    # A value of '0' disables sending any explicit LACs. Default is 0.
    bookkeeperExplicitLacIntervalInMills=0
    
    # Use older Bookkeeper wire protocol with bookie
    bookkeeperUseV2WireProtocol=true
    
    # Expose bookkeeper client managed ledger stats to prometheus. default is false
    # bookkeeperClientExposeStatsToPrometheus=false
    
    ### --- Managed Ledger --- ###
    
    # Number of bookies to use when creating a ledger
    managedLedgerDefaultEnsembleSize=1
    
    # Number of copies to store for each message
    managedLedgerDefaultWriteQuorum=1
    
    # Number of guaranteed copies (acks to wait before write is complete)
    managedLedgerDefaultAckQuorum=1
    
    # How frequently to flush the cursor positions that were accumulated due to rate limiting. (seconds).
    # Default is 60 seconds
    managedLedgerCursorPositionFlushSeconds=60
    
    # Default type of checksum to use when writing to BookKeeper. Default is "CRC32C"
    # Other possible options are "CRC32", "MAC" or "DUMMY" (no checksum).
    managedLedgerDigestType=CRC32C
    
    # Number of threads to be used for managed ledger tasks dispatching
    managedLedgerNumWorkerThreads=4
    
    # Number of threads to be used for managed ledger scheduled tasks
    managedLedgerNumSchedulerThreads=4
    
    # Amount of memory to use for caching data payload in managed ledger. This memory
    # is allocated from JVM direct memory and it's shared across all the topics
    # running in the same broker. By default, uses 1/5th of available direct memory
    managedLedgerCacheSizeMB=
    
    # Whether we should make a copy of the entry payloads when inserting in cache
    managedLedgerCacheCopyEntries=false
    
    # Threshold to which the cache level is brought down when eviction is triggered
    managedLedgerCacheEvictionWatermark=0.9
    
    # Configure the cache eviction frequency for the managed ledger cache (evictions/sec)
    managedLedgerCacheEvictionFrequency=100.0
    
    # All entries that have stayed in cache for more than the configured time, will be evicted
    managedLedgerCacheEvictionTimeThresholdMillis=1000
    
    # Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged'
    # and thus should be set as inactive.
    managedLedgerCursorBackloggedThreshold=1000
    
    # Rate limit the amount of writes generated by consumer acking the messages
    managedLedgerDefaultMarkDeleteRateLimit=0.1
    
    # Max number of entries to append to a ledger before triggering a rollover
    # A ledger rollover is triggered after the min rollover time has passed
    # and one of the following conditions is true:
    #  * The max rollover time has been reached
    #  * The max entries have been written to the ledger
    #  * The max ledger size has been written to the ledger
    managedLedgerMaxEntriesPerLedger=50000
    
    # Minimum time between ledger rollover for a topic
    managedLedgerMinLedgerRolloverTimeMinutes=10
    
    # Maximum time before forcing a ledger rollover for a topic
    managedLedgerMaxLedgerRolloverTimeMinutes=240
    
    # Max number of entries to append to a cursor ledger
    managedLedgerCursorMaxEntriesPerLedger=50000
    
    # Max time before triggering a rollover on a cursor ledger
    managedLedgerCursorRolloverTimeInSeconds=14400
    
    # Maximum ledger size before triggering a rollover for a topic (MB)
    managedLedgerMaxSizePerLedgerMbytes=2048
    
    # Max number of "acknowledgment holes" that are going to be persistently stored.
    # When acknowledging out of order, a consumer will leave holes that are supposed
    # to be quickly filled by acking all the messages. The information of which
    # messages are acknowledged is persisted by compressing in "ranges" of messages
    # that were acknowledged. After the max number of ranges is reached, the information
    # will only be tracked in memory and messages will be redelivered in case of
    # crashes.
    managedLedgerMaxUnackedRangesToPersist=10000
    
    # Max number of "acknowledgment holes" that can be stored in Zookeeper. If number of unack message range is higher
    # than this limit then broker will persist unacked ranges into bookkeeper to avoid additional data overhead into
    # zookeeper.
    managedLedgerMaxUnackedRangesToPersistInZooKeeper=1000
    
    # Skip reading non-recoverable/unreadable data-ledger under managed-ledger's list. It helps when data-ledgers get
    # corrupted at bookkeeper and managed-cursor is stuck at that ledger.
    autoSkipNonRecoverableData=false
    
    # operation timeout while updating managed-ledger metadata.
    managedLedgerMetadataOperationsTimeoutSeconds=60
    
    # Read entries timeout when broker tries to read messages from bookkeeper.
    managedLedgerReadEntryTimeoutSeconds=0
    
    # Add entry timeout when broker tries to publish message to bookkeeper (0 to disable it).
    managedLedgerAddEntryTimeoutSeconds=0
    
    # New entries check delay for the cursor under the managed ledger.
    # If no new messages in the topic, the cursor will try to check again after the delay time.
    # For latency-sensitive consumption scenarios, this can be set to a smaller value or to 0.
    # Of course, using a smaller value may degrade consumption throughput. Default is 10ms.
    managedLedgerNewEntriesCheckDelayInMillis=10
    
    # Use Open Range-Set to cache unacked messages
    managedLedgerUnackedRangesOpenCacheSetEnabled=true
    
    # Managed ledger prometheus stats latency rollover seconds (default: 60s)
    managedLedgerPrometheusStatsLatencyRolloverSeconds=60
    
    # Whether to trace managed ledger task execution time
    managedLedgerTraceTaskExecution=true
    
    # If you want to customize the bookie ID or use a dynamic network address for the bookie,
    # you can set this option.
    # Bookie advertises itself using bookieId rather than
    # BookieSocketAddress (hostname:port or IP:port).
    # bookieId is a non-empty string that can contain ASCII digits and letters ([a-zA-Z0-9]),
    # colons, dashes, and dots.
    # For more information about bookieId, see http://bookkeeper.apache.org/bps/BP-41-bookieid/.
    # bookieId=
    
    ### --- Load balancer --- ###
    
    #loadManagerClassName=org.apache.pulsar.broker.loadbalance.NoopLoadManager
    
    # Enable load balancer
    loadBalancerEnabled=false
    
    # Percentage of change to trigger load report update
    loadBalancerReportUpdateThresholdPercentage=10
    
    # maximum interval to update load report
    loadBalancerReportUpdateMaxIntervalMinutes=15
    
    # Frequency of host usage checks used to collect load reports
    loadBalancerHostUsageCheckIntervalMinutes=1
    
    # Load shedding interval. Broker periodically checks whether some traffic should be offloaded from
    # over-loaded brokers to other under-loaded brokers
    loadBalancerSheddingIntervalMinutes=1
    
    # Prevent the same topics from being shed and moved to another broker more than once within this timeframe
    loadBalancerSheddingGracePeriodMinutes=30
    
    # Usage threshold to allocate max number of topics to broker
    loadBalancerBrokerMaxTopics=50000
    
    # Interval to flush dynamic resource quota to ZooKeeper
    loadBalancerResourceQuotaUpdateIntervalMinutes=15
    
    # enable/disable namespace bundle auto split
    loadBalancerAutoBundleSplitEnabled=true
    
    # enable/disable automatic unloading of split bundles
    loadBalancerAutoUnloadSplitBundlesEnabled=true
    
    # maximum topics in a bundle, otherwise bundle split will be triggered
    loadBalancerNamespaceBundleMaxTopics=1000
    
    # maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered
    loadBalancerNamespaceBundleMaxSessions=1000
    
    # maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered
    loadBalancerNamespaceBundleMaxMsgRate=30000
    
    # maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered
    loadBalancerNamespaceBundleMaxBandwidthMbytes=100
    
    # maximum number of bundles in a namespace
    loadBalancerNamespaceMaximumBundles=128
    
    # The broker resource usage threshold.
    # When the broker resource usage is greater than the pulsar cluster average resource usage,
    # the threshold shedder will be triggered to offload bundles from the broker.
    # It only takes effect in the ThresholdShedder strategy.
    loadBalancerBrokerThresholdShedderPercentage=10
    
    # The percentage weight given to historical usage when calculating new resource usage.
    # It only takes effect in the ThresholdShedder strategy.
    loadBalancerHistoryResourcePercentage=0.9
    
    # The bandwidth-in usage weight when calculating new resource usage.
    # It only takes effect in the ThresholdShedder strategy.
    loadBalancerBandwithInResourceWeight=1.0
    
    # The bandwidth-out usage weight when calculating new resource usage.
    # It only takes effect in the ThresholdShedder strategy.
    loadBalancerBandwithOutResourceWeight=1.0
    
    # The CPU usage weight when calculating new resource usage.
    # It only takes effect in the ThresholdShedder strategy.
    loadBalancerCPUResourceWeight=1.0
    
    # The heap memory usage weight when calculating new resource usage.
    # It only takes effect in the ThresholdShedder strategy.
    loadBalancerMemoryResourceWeight=1.0
    
    # The direct memory usage weight when calculating new resource usage.
    # It only takes effect in the ThresholdShedder strategy.
    loadBalancerDirectMemoryResourceWeight=1.0
    
    # Bundle unload minimum throughput threshold (MB), avoiding bundle unload frequently.
    # It only takes effect in the ThresholdShedder strategy.
    loadBalancerBundleUnloadMinThroughputThreshold=10
    
    # Time to wait for the unloading of a namespace bundle
    namespaceBundleUnloadingTimeoutMs=60000
    
    ### --- Replication --- ###
    
    # Enable replication metrics
    replicationMetricsEnabled=true
    
    # Max number of connections to open for each broker in a remote cluster
    # More connections host-to-host lead to better throughput over high-latency
    # links.
    replicationConnectionsPerBroker=16
    
    # Replicator producer queue size
    replicationProducerQueueSize=1000
    
    # Duration to check replication policy to avoid replicator inconsistency
    # due to missing ZooKeeper watch (disable with value 0)
    replicationPolicyCheckDurationSeconds=600
    
    # Default message retention time
    defaultRetentionTimeInMinutes=0
    
    # Default retention size
    defaultRetentionSizeInMB=0
    
    # How often to check whether the connections are still alive
    keepAliveIntervalSeconds=30
    
    ### --- WebSocket --- ###
    
    # Enable the WebSocket API service in broker
    webSocketServiceEnabled=true
    
    # Number of IO threads in Pulsar Client used in WebSocket proxy
    webSocketNumIoThreads=8
    
    # Number of connections per Broker in Pulsar Client used in WebSocket proxy
    webSocketConnectionsPerBroker=8
    
    # Time in milliseconds that idle WebSocket session times out
    webSocketSessionIdleTimeoutMillis=300000
    
    # The maximum size of a text message during parsing in WebSocket proxy
    webSocketMaxTextFrameSize=1048576
    
    ### --- Metrics --- ###
    
    # Enable topic level metrics
    exposeTopicLevelMetricsInPrometheus=true
    
    # Time in milliseconds that metrics endpoint would time out. Default is 30s.
    # Increase it if there are a lot of topics to expose topic-level metrics.
    # Set it to 0 to disable timeout.
    metricsServletTimeoutMs=30000
    
    # Classname of Pluggable JVM GC metrics logger that can log GC specific metrics
    # jvmGCMetricsLoggerClassName=
    
    ### --- Broker Web Stats --- ###
    
    # Enable exposing publisher stats
    exposePublisherStats=true
    
    # Enable exposing the precise backlog stats.
    # Set to false to calculate from the published counter and consumed counter; this is more efficient but may be inaccurate.
    # Default is false.
    exposePreciseBacklogInPrometheus=false
    
    # Enable splitting topic and partition label in Prometheus.
    # If enabled, a topic name will split into 2 parts, one is topic name without partition index,
    # another one is partition index, e.g. (topic=xxx, partition=0).
    # If the topic is a non-partitioned topic, -1 will be used for the partition index.
    # If disabled, one label to represent the topic and partition, e.g. (topic=xxx-partition-0)
    # Default is false.
    
    splitTopicAndPartitionLabelInPrometheus=false
    
    # If true, aggregate publisher stats of PartitionedTopicStats by producerName.
    # Otherwise, aggregate it by list index.
    aggregatePublisherStatsByProducerName=false
    
    ### --- Schema storage --- ###
    # The schema storage implementation used by this broker.
    schemaRegistryStorageClassName=org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory
    
    # Whether to enable schema validation.
    # When schema validation is enabled, if a producer without a schema attempts to produce the message to a topic with schema, the producer is rejected and disconnected.
    isSchemaValidationEnforced=false
    
    # The schema compatibility strategy at broker level.
    # Available values: ALWAYS_INCOMPATIBLE, ALWAYS_COMPATIBLE, BACKWARD, FORWARD, FULL, BACKWARD_TRANSITIVE, FORWARD_TRANSITIVE, FULL_TRANSITIVE
    schemaCompatibilityStrategy=FULL
    
    ### --- Deprecated config variables --- ###
    
    # Deprecated. Use configurationStoreServers
    globalZookeeperServers=
    
    # Deprecated. Use brokerDeleteInactiveTopicsFrequencySeconds
    brokerServicePurgeInactiveFrequencyInSeconds=60
    
    ### --- BookKeeper Configuration --- ###
    
    ledgerStorageClass=org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage
    
    # The maximum netty frame size in bytes. Any message received larger than this will be rejected. The default value is 5MB.
    nettyMaxFrameSizeBytes=5253120
    
    # Size of Write Cache. Memory is allocated from JVM direct memory.
    # Write cache is used to buffer entries before flushing into the entry log
    # For good performance, it should be big enough to hold a substantial amount
    # of entries in the flush interval
    # By default it will be allocated to 1/4th of the available direct memory
    dbStorage_writeCacheMaxSizeMb=
    
    # Size of Read cache. Memory is allocated from JVM direct memory.
    # This read cache is pre-filled doing read-ahead whenever a cache miss happens
    # By default it will be allocated to 1/4th of the available direct memory
    dbStorage_readAheadCacheMaxSizeMb=
    
    # How many entries to pre-fill in cache after a read cache miss
    dbStorage_readAheadCacheBatchSize=1000
    
    flushInterval=60000
    
    ## RocksDB specific configurations
    ## DbLedgerStorage uses RocksDB to store the indexes from
    ## (ledgerId, entryId) -> (entryLog, offset)
    
    # Size of RocksDB block-cache. For best performance, this cache
    # should be big enough to hold a significant portion of the index
    # database which can reach ~2GB in some cases
    # Default is to use 10% of the direct memory size
    dbStorage_rocksDB_blockCacheSize=
    
    # Other RocksDB specific tunables
    dbStorage_rocksDB_writeBufferSizeMB=4
    dbStorage_rocksDB_sstSizeInMB=4
    dbStorage_rocksDB_blockSize=4096
    dbStorage_rocksDB_bloomFilterBitsPerKey=10
    dbStorage_rocksDB_numLevels=-1
    dbStorage_rocksDB_numFilesInLevel0=4
    dbStorage_rocksDB_maxSizeInLevel1MB=256
    
    # Maximum latency to impose on a journal write to achieve grouping
    journalMaxGroupWaitMSec=1
    
    # Should the data be fsynced on journal before acknowledgment.
    journalSyncData=false
    
    
    # For each ledger dir, maximum disk space which can be used.
    # Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will
    # be written to that partition. If all ledger dir partitions are full, then bookie
    # will turn to readonly mode if 'readOnlyModeEnabled=true' is set, else it will
    # shutdown.
    # Valid values should be in between 0 and 1 (exclusive).
    diskUsageThreshold=0.99
    
    # The disk free space low water mark threshold.
    # Disk is considered full when usage threshold is exceeded.
    # Disk returns back to non-full state when usage is below low water mark threshold.
    # This prevents it from going back and forth between these states frequently
    # when concurrent writes and compaction are happening. This also prevents the bookie from
    # switching frequently between read-only and read-writes states in the same cases.
    diskUsageWarnThreshold=0.99
    
    # Whether the bookie is allowed to use a loopback interface as its primary
    # interface (i.e. the interface it uses to establish its identity).
    # By default, loopback interfaces are not allowed as the primary
    # interface.
    # Using a loopback interface as the primary interface usually indicates
    # a configuration error. For example, it's fairly common in some VPS setups
    # to not configure a hostname, or to have the hostname resolve to
    # 127.0.0.1. If this is the case, then all bookies in the cluster will
    # establish their identities as 127.0.0.1:3181, and only one will be able
    # to join the cluster. For VPSs configured like this, you should explicitly
    # set the listening interface.
    allowLoopback=true
    
    # Interval to trigger the next garbage collection, in milliseconds
    # Since garbage collection runs in the background, too frequent gc
    # will hurt performance. It is better to use a higher gc
    # interval if there is enough disk capacity.
    gcWaitTime=300000
    
    # Enable topic auto creation if new producer or consumer connected (disable auto creation with value false)
    allowAutoTopicCreation=true
    
    # The type of topic that is allowed to be automatically created. (partitioned/non-partitioned)
    allowAutoTopicCreationType=non-partitioned
    
    # Enable subscription auto creation if new consumer connected (disable auto creation with value false)
    allowAutoSubscriptionCreation=true
    
    # The default number of partitions for topics that are automatically created when allowAutoTopicCreationType is partitioned.
    defaultNumPartitions=1
    
    ### --- Transaction config variables --- ###
    # Enable transaction coordinator in broker
    transactionCoordinatorEnabled=false
    transactionMetadataStoreProviderClassName=org.apache.pulsar.transaction.coordinator.impl.MLTransactionMetadataStoreProvider
    
    # Transaction buffer take snapshot transaction count
    transactionBufferSnapshotMaxTransactionCount=1000
    
    # Transaction buffer take snapshot interval time
    # Unit : millisecond
    transactionBufferSnapshotMinTimeInMillis=5000
    
    ### --- Packages management service configuration variables (begin) --- ###
    
    # Enable the packages management service or not
    enablePackagesManagement=false
    
    # The packages management storage service provider
    packagesManagementStorageProvider=org.apache.pulsar.packages.management.storage.bookkeeper.BookKeeperPackagesStorageProvider
    
    # When the packages storage provider is bookkeeper, you can use this configuration to
    # control the number of replicas for storing the package
    packagesReplicas=1
    
    # The bookkeeper ledger root path
    packagesManagementLedgerRootPath=/ledgers
    
    ### --- Packages management service configuration variables (end) --- ###
    
    ### --- Deprecated settings --- ###
    
    # These settings are left here for compatibility
    
    # Zookeeper session timeout in milliseconds
    # Deprecated: use metadataStoreSessionTimeoutMillis
    zooKeeperSessionTimeoutMillis=-1
    
    # ZooKeeper operation timeout in seconds
    # Deprecated: use metadataStoreOperationTimeoutSeconds
    zooKeeperOperationTimeoutSeconds=-1
    
    # ZooKeeper cache expiry time in seconds
    # Deprecated: use metadataStoreCacheExpirySeconds
    zooKeeperCacheExpirySeconds=-1
    
    messagingProtocols=rocketmq
    protocolHandlerDirectory=/root/rop/rocketmq-impl/target
    loadManagerClassName=org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
    
    rocketmqListeners=rocketmq://127.0.0.1:9876
    advertisedListeners=INTERNAL:pulsar://127.0.0.1:6650,INTERNAL_ROP:pulsar://127.0.0.1:9876
    rocketmqListenerPortMap=9876:INTERNAL_ROP
    
    brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor
    
    brokerDeleteInactiveTopicsEnabled=false
    

    Error: (screenshot attachment, 2022-09-23 1:45:36 PM)

    opened by tianshimoyi 0
  • Bump fastjson from 1.2.76 to 1.2.83

    Bump fastjson from 1.2.76 to 1.2.83

    Bumps fastjson from 1.2.76 to 1.2.83.

    Release notes

    Sourced from fastjson's releases.

    FASTJSON 1.2.83 released (security fix)

    This is a security fix release. It fixes a recently reported vulnerability that, in specific scenarios, can bypass the autoType-off restriction. fastjson users are advised to take security measures as soon as possible to keep their systems safe.

    Security fix guide: https://github.com/alibaba/fastjson/wiki/security_update_20220523 (see the safeMode sketch at the end of this entry)

    Issues

    1. Security hardening
    2. Fix the setAccessible error under JDK 17 #4077

    fastjson 1.2.79 released, bug fixes

    This is another bug-fix release; upgrade as needed.

    Issues

    1. Fix serialization errors in some scenarios caused by the introduction of MethodInheritanceComparator
    2. Improve JDK 9 compatibility
    3. Fix JSONArray/JSONObject equals not returning true directly when the internal map/list objects are identical

    Related links

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.
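
    For reference, since the vulnerability described in the release notes above involves bypassing the autoType-off restriction, applications that deserialize untrusted input can additionally enable fastjson's safeMode, which disables autoType entirely. A minimal sketch (class and field names here are illustrative only):

    import com.alibaba.fastjson.JSON;
    import com.alibaba.fastjson.parser.ParserConfig;

    public class FastjsonSafeModeExample {
        public static void main(String[] args) {
            // Turn off autoType support globally (safeMode), hardening against gadget-based deserialization.
            ParserConfig.getGlobalInstance().setSafeMode(true);

            // Plain parsing still works; @type-driven autoType deserialization is rejected.
            String json = "{\"name\":\"rop\",\"version\":\"1.2.83\"}";
            System.out.println(JSON.parseObject(json).getString("name"));
        }
    }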

    dependencies 
    opened by dependabot[bot] 0
  • Standalone ROP Shutdown

    Standalone ROP Shutdown

    pulsar version 2.10.0

    logs

    pulsar_1          | 2022-05-24T07:52:30,228+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ProducerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-9] [standalone-18-9] Closed Producer
    pulsar_1          | 2022-05-24T07:52:30,228+0000 [Rop-group-offset-reader] INFO  org.streamnative.pulsar.handlers.rocketmq.inner.consumer.metadata.GroupMetaManager - Start load group offset.
    pulsar_1          | 2022-05-24T07:52:30,229+0000 [main] INFO  org.streamnative.pulsar.handlers.rocketmq.inner.consumer.metadata.GroupMetaManager - Start GroupMetaManager service successfully.
    pulsar_1          | 2022-05-24T07:52:30,230+0000 [NettyEventExecutor] INFO  org.streamnative.pulsar.handlers.rocketmq.inner.NettyRemotingAbstract - NettyEventExecutor service started
    pulsar_1          | 2022-05-24T07:52:30,239+0000 [main] ERROR org.apache.pulsar.PulsarStandaloneStarter - Failed to start pulsar service.
    pulsar_1          | java.lang.NoSuchMethodError: 'org.apache.pulsar.zookeeper.ZooKeeperClientFactory org.apache.pulsar.broker.PulsarService.getZkClientFactory()'
    pulsar_1          |     at org.streamnative.pulsar.handlers.rocketmq.inner.proxy.RopBrokerProxy.start(RopBrokerProxy.java:689) ~[?:?]
    pulsar_1          |     at org.streamnative.pulsar.handlers.rocketmq.inner.RocketMQBrokerController.start(RocketMQBrokerController.java:578) ~[?:?]
    pulsar_1          |     at org.streamnative.pulsar.handlers.rocketmq.RocketMQProtocolHandler.start(RocketMQProtocolHandler.java:123) ~[?:?]
    pulsar_1          |     at org.apache.pulsar.broker.protocol.ProtocolHandlerWithClassLoader.start(ProtocolHandlerWithClassLoader.java:76) ~[org.apache.pulsar-pulsar-broker-2.10.0.jar:2.10.0]
    pulsar_1          |     at org.apache.pulsar.broker.protocol.ProtocolHandlers.lambda$start$4(ProtocolHandlers.java:149) ~[org.apache.pulsar-pulsar-broker-2.10.0.jar:2.10.0]
    pulsar_1          |     at java.lang.Iterable.forEach(Iterable.java:75) ~[?:?]
    pulsar_1          |     at org.apache.pulsar.broker.protocol.ProtocolHandlers.start(ProtocolHandlers.java:149) ~[org.apache.pulsar-pulsar-broker-2.10.0.jar:2.10.0]
    pulsar_1          |     at org.apache.pulsar.broker.PulsarService.start(PulsarService.java:780) ~[org.apache.pulsar-pulsar-broker-2.10.0.jar:2.10.0]
    pulsar_1          |     at org.apache.pulsar.PulsarStandalone.start(PulsarStandalone.java:301) ~[org.apache.pulsar-pulsar-broker-2.10.0.jar:2.10.0]
    pulsar_1          |     at org.apache.pulsar.PulsarStandaloneStarter.main(PulsarStandaloneStarter.java:139) [org.apache.pulsar-pulsar-broker-2.10.0.jar:2.10.0]
    pulsar_1          | 2022-05-24T07:52:30,396+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-0][multiTopicsReader-2601b84b42] Subscribing to topic on cnx [id: 0x38ee9e61, L:/127.0.0.1:46522 - R:127.0.0.1/127.0.0.1:6650], consumerId 0
    pulsar_1          | 2022-05-24T07:52:30,400+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-1][multiTopicsReader-2601b84b42] Subscribing to topic on cnx [id: 0x38ee9e61, L:/127.0.0.1:46522 - R:127.0.0.1/127.0.0.1:6650], consumerId 1
    pulsar_1          | 2022-05-24T07:52:30,400+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-2][multiTopicsReader-2601b84b42] Subscribing to topic on cnx [id: 0x38ee9e61, L:/127.0.0.1:46522 - R:127.0.0.1/127.0.0.1:6650], consumerId 2
    pulsar_1          | 2022-05-24T07:52:30,401+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-3][multiTopicsReader-2601b84b42] Subscribing to topic on cnx [id: 0x38ee9e61, L:/127.0.0.1:46522 - R:127.0.0.1/127.0.0.1:6650], consumerId 3
    pulsar_1          | 2022-05-24T07:52:30,401+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-4][multiTopicsReader-2601b84b42] Subscribing to topic on cnx [id: 0x38ee9e61, L:/127.0.0.1:46522 - R:127.0.0.1/127.0.0.1:6650], consumerId 4
    pulsar_1          | 2022-05-24T07:52:30,402+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-5][multiTopicsReader-2601b84b42] Subscribing to topic on cnx [id: 0x38ee9e61, L:/127.0.0.1:46522 - R:127.0.0.1/127.0.0.1:6650], consumerId 5
    pulsar_1          | 2022-05-24T07:52:30,402+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-6][multiTopicsReader-2601b84b42] Subscribing to topic on cnx [id: 0x38ee9e61, L:/127.0.0.1:46522 - R:127.0.0.1/127.0.0.1:6650], consumerId 6
    pulsar_1          | 2022-05-24T07:52:30,402+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-7][multiTopicsReader-2601b84b42] Subscribing to topic on cnx [id: 0x38ee9e61, L:/127.0.0.1:46522 - R:127.0.0.1/127.0.0.1:6650], consumerId 7
    pulsar_1          | 2022-05-24T07:52:30,403+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-8][multiTopicsReader-2601b84b42] Subscribing to topic on cnx [id: 0x38ee9e61, L:/127.0.0.1:46522 - R:127.0.0.1/127.0.0.1:6650], consumerId 8
    pulsar_1          | 2022-05-24T07:52:30,403+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-9][multiTopicsReader-2601b84b42] Subscribing to topic on cnx [id: 0x38ee9e61, L:/127.0.0.1:46522 - R:127.0.0.1/127.0.0.1:6650], consumerId 9
    pulsar_1          | 2022-05-24T07:52:30,428+0000 [pulsar-io-29-3] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [rocketmq/offset/persistent/__consumer_offsets-partition-0] Created non-durable cursor read-position=6:0 mark-delete-position=6:-1
    pulsar_1          | 2022-05-24T07:52:30,445+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-0][multiTopicsReader-2601b84b42] Subscribed to topic on 127.0.0.1/127.0.0.1:6650 -- consumer: 0
    pulsar_1          | 2022-05-24T07:52:30,446+0000 [pulsar-io-29-3] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [rocketmq/offset/persistent/__consumer_offsets-partition-1] Created non-durable cursor read-position=9:0 mark-delete-position=9:-1
    pulsar_1          | 2022-05-24T07:52:30,448+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-1][multiTopicsReader-2601b84b42] Subscribed to topic on 127.0.0.1/127.0.0.1:6650 -- consumer: 1
    pulsar_1          | 2022-05-24T07:52:30,448+0000 [pulsar-io-29-3] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [rocketmq/offset/persistent/__consumer_offsets-partition-2] Created non-durable cursor read-position=12:0 mark-delete-position=12:-1
    pulsar_1          | 2022-05-24T07:52:30,449+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-2][multiTopicsReader-2601b84b42] Subscribed to topic on 127.0.0.1/127.0.0.1:6650 -- consumer: 2
    pulsar_1          | 2022-05-24T07:52:30,449+0000 [pulsar-io-29-3] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [rocketmq/offset/persistent/__consumer_offsets-partition-3] Created non-durable cursor read-position=14:0 mark-delete-position=14:-1
    pulsar_1          | 2022-05-24T07:52:30,450+0000 [pulsar-io-29-3] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [rocketmq/offset/persistent/__consumer_offsets-partition-4] Created non-durable cursor read-position=8:0 mark-delete-position=8:-1
    pulsar_1          | 2022-05-24T07:52:30,450+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-3][multiTopicsReader-2601b84b42] Subscribed to topic on 127.0.0.1/127.0.0.1:6650 -- consumer: 3
    pulsar_1          | 2022-05-24T07:52:30,451+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-4][multiTopicsReader-2601b84b42] Subscribed to topic on 127.0.0.1/127.0.0.1:6650 -- consumer: 4
    pulsar_1          | 2022-05-24T07:52:30,451+0000 [pulsar-io-29-3] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [rocketmq/offset/persistent/__consumer_offsets-partition-5] Created non-durable cursor read-position=10:0 mark-delete-position=10:-1
    pulsar_1          | 2022-05-24T07:52:30,452+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-5][multiTopicsReader-2601b84b42] Subscribed to topic on 127.0.0.1/127.0.0.1:6650 -- consumer: 5
    pulsar_1          | 2022-05-24T07:52:30,453+0000 [pulsar-io-29-3] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [rocketmq/offset/persistent/__consumer_offsets-partition-6] Created non-durable cursor read-position=13:0 mark-delete-position=13:-1
    pulsar_1          | 2022-05-24T07:52:30,454+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-6][multiTopicsReader-2601b84b42] Subscribed to topic on 127.0.0.1/127.0.0.1:6650 -- consumer: 6
    pulsar_1          | 2022-05-24T07:52:30,454+0000 [pulsar-io-29-3] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [rocketmq/offset/persistent/__consumer_offsets-partition-7] Created non-durable cursor read-position=15:0 mark-delete-position=15:-1
    pulsar_1          | 2022-05-24T07:52:30,456+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-7][multiTopicsReader-2601b84b42] Subscribed to topic on 127.0.0.1/127.0.0.1:6650 -- consumer: 7
    pulsar_1          | 2022-05-24T07:52:30,456+0000 [pulsar-io-29-3] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [rocketmq/offset/persistent/__consumer_offsets-partition-8] Created non-durable cursor read-position=7:0 mark-delete-position=7:-1
    pulsar_1          | 2022-05-24T07:52:30,457+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-8][multiTopicsReader-2601b84b42] Subscribed to topic on 127.0.0.1/127.0.0.1:6650 -- consumer: 8
    pulsar_1          | 2022-05-24T07:52:30,457+0000 [pulsar-io-29-3] INFO  org.apache.bookkeeper.mledger.impl.NonDurableCursorImpl - [rocketmq/offset/persistent/__consumer_offsets-partition-9] Created non-durable cursor read-position=11:0 mark-delete-position=11:-1
    pulsar_1          | 2022-05-24T07:52:30,458+0000 [pulsar-io-29-2] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-9][multiTopicsReader-2601b84b42] Subscribed to topic on 127.0.0.1/127.0.0.1:6650 -- consumer: 9
    pulsar_1          | 2022-05-24T07:52:30,508+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.Consumer - Disconnecting consumer: Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-6, name=multiTopicsReader-2601b84b42}, consumerId=6, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,512+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-6, name=multiTopicsReader-2601b84b42}, consumerId=6, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,512+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-6][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-6, ackPos=13:-1, readPos=76:0}]
    pulsar_1          | 2022-05-24T07:52:30,513+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-6][multiTopicsReader-2601b84b42] Successfully closed dispatcher for reader
    pulsar_1          | 2022-05-24T07:52:30,514+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-6][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-6, ackPos=13:-1, readPos=76:0}]
    pulsar_1          | 2022-05-24T07:52:30,514+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-6][multiTopicsReader-2601b84b42] Successfully disconnected and closed subscription
    pulsar_1          | 2022-05-24T07:52:30,517+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.Consumer - Disconnecting consumer: Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-2, name=multiTopicsReader-2601b84b42}, consumerId=2, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,518+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-2, name=multiTopicsReader-2601b84b42}, consumerId=2, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,518+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-2][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-2, ackPos=12:-1, readPos=92:0}]
    pulsar_1          | 2022-05-24T07:52:30,518+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-2][multiTopicsReader-2601b84b42] Successfully closed dispatcher for reader
    pulsar_1          | 2022-05-24T07:52:30,519+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-2][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-2, ackPos=12:-1, readPos=92:0}]
    pulsar_1          | 2022-05-24T07:52:30,519+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-2][multiTopicsReader-2601b84b42] Successfully disconnected and closed subscription
    pulsar_1          | 2022-05-24T07:52:30,554+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.Consumer - Disconnecting consumer: Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-8, name=multiTopicsReader-2601b84b42}, consumerId=8, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,554+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-8, name=multiTopicsReader-2601b84b42}, consumerId=8, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,555+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-8][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-8, ackPos=7:-1, readPos=167:0}]
    pulsar_1          | 2022-05-24T07:52:30,555+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-8][multiTopicsReader-2601b84b42] Successfully closed dispatcher for reader
    pulsar_1          | 2022-05-24T07:52:30,555+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-8][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-8, ackPos=7:-1, readPos=182:0}]
    pulsar_1          | 2022-05-24T07:52:30,555+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-8][multiTopicsReader-2601b84b42] Successfully disconnected and closed subscription
    pulsar_1          | 2022-05-24T07:52:30,557+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.Consumer - Disconnecting consumer: Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-0, name=multiTopicsReader-2601b84b42}, consumerId=0, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,557+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-0, name=multiTopicsReader-2601b84b42}, consumerId=0, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,557+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-0][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-0, ackPos=6:-1, readPos=150:0}]
    pulsar_1          | 2022-05-24T07:52:30,557+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-0][multiTopicsReader-2601b84b42] Successfully closed dispatcher for reader
    pulsar_1          | 2022-05-24T07:52:30,558+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-0][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-0, ackPos=6:-1, readPos=150:0}]
    pulsar_1          | 2022-05-24T07:52:30,558+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-0][multiTopicsReader-2601b84b42] Successfully disconnected and closed subscription
    pulsar_1          | 2022-05-24T07:52:30,559+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.Consumer - Disconnecting consumer: Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-4, name=multiTopicsReader-2601b84b42}, consumerId=4, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,560+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-4, name=multiTopicsReader-2601b84b42}, consumerId=4, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,560+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-4][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-4, ackPos=8:-1, readPos=183:0}]
    pulsar_1          | 2022-05-24T07:52:30,560+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-4][multiTopicsReader-2601b84b42] Successfully closed dispatcher for reader
    pulsar_1          | 2022-05-24T07:52:30,561+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-4][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-4, ackPos=8:-1, readPos=183:0}]
    pulsar_1          | 2022-05-24T07:52:30,561+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-4][multiTopicsReader-2601b84b42] Successfully disconnected and closed subscription
    pulsar_1          | 2022-05-24T07:52:30,574+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.Consumer - Disconnecting consumer: Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-5, name=multiTopicsReader-2601b84b42}, consumerId=5, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,575+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-5, name=multiTopicsReader-2601b84b42}, consumerId=5, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,575+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-5][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-5, ackPos=10:-1, readPos=201:0}]
    pulsar_1          | 2022-05-24T07:52:30,575+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-5][multiTopicsReader-2601b84b42] Successfully closed dispatcher for reader
    pulsar_1          | 2022-05-24T07:52:30,575+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-5][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-5, ackPos=10:-1, readPos=201:0}]
    pulsar_1          | 2022-05-24T07:52:30,575+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-5][multiTopicsReader-2601b84b42] Successfully disconnected and closed subscription
    pulsar_1          | 2022-05-24T07:52:30,577+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.Consumer - Disconnecting consumer: Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-1, name=multiTopicsReader-2601b84b42}, consumerId=1, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,577+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-1, name=multiTopicsReader-2601b84b42}, consumerId=1, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,577+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-1][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-1, ackPos=9:-1, readPos=217:0}]
    pulsar_1          | 2022-05-24T07:52:30,577+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-1][multiTopicsReader-2601b84b42] Successfully closed dispatcher for reader
    pulsar_1          | 2022-05-24T07:52:30,577+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-1][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-1, ackPos=9:-1, readPos=217:0}]
    pulsar_1          | 2022-05-24T07:52:30,577+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-1][multiTopicsReader-2601b84b42] Successfully disconnected and closed subscription
    pulsar_1          | 2022-05-24T07:52:30,578+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.Consumer - Disconnecting consumer: Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-9, name=multiTopicsReader-2601b84b42}, consumerId=9, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,578+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-9, name=multiTopicsReader-2601b84b42}, consumerId=9, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,578+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-9][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-9, ackPos=11:-1, readPos=219:0}]
    pulsar_1          | 2022-05-24T07:52:30,578+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-9][multiTopicsReader-2601b84b42] Successfully closed dispatcher for reader
    pulsar_1          | 2022-05-24T07:52:30,579+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-9][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-9, ackPos=11:-1, readPos=219:0}]
    pulsar_1          | 2022-05-24T07:52:30,579+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-9][multiTopicsReader-2601b84b42] Successfully disconnected and closed subscription
    pulsar_1          | 2022-05-24T07:52:30,593+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.Consumer - Disconnecting consumer: Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-7, name=multiTopicsReader-2601b84b42}, consumerId=7, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,594+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-7, name=multiTopicsReader-2601b84b42}, consumerId=7, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,594+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-7][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-7, ackPos=15:-1, readPos=287:0}]
    pulsar_1          | 2022-05-24T07:52:30,594+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-7][multiTopicsReader-2601b84b42] Successfully closed dispatcher for reader
    pulsar_1          | 2022-05-24T07:52:30,594+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-7][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-7, ackPos=15:-1, readPos=287:0}]
    pulsar_1          | 2022-05-24T07:52:30,594+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-7][multiTopicsReader-2601b84b42] Successfully disconnected and closed subscription
    pulsar_1          | 2022-05-24T07:52:30,596+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.Consumer - Disconnecting consumer: Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-3, name=multiTopicsReader-2601b84b42}, consumerId=3, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,596+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.AbstractDispatcherSingleActiveConsumer - Removing consumer Consumer{subscription=PersistentSubscription{topic=persistent://rocketmq/offset/__consumer_offsets-partition-3, name=multiTopicsReader-2601b84b42}, consumerId=3, consumerName=4d14e, address=/127.0.0.1:46522}
    pulsar_1          | 2022-05-24T07:52:30,596+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-3][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-3, ackPos=14:-1, readPos=270:0}]
    pulsar_1          | 2022-05-24T07:52:30,596+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-3][multiTopicsReader-2601b84b42] Successfully closed dispatcher for reader
    pulsar_1          | 2022-05-24T07:52:30,596+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-3][multiTopicsReader-2601b84b42] Successfully closed subscription [NonDurableCursorImpl{ledger=rocketmq/offset/persistent/__consumer_offsets-partition-3, ackPos=14:-1, readPos=270:0}]
    pulsar_1          | 2022-05-24T07:52:30,596+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.persistent.PersistentSubscription - [persistent://rocketmq/offset/__consumer_offsets-partition-3][multiTopicsReader-2601b84b42] Successfully disconnected and closed subscription
    pulsar_1          | 2022-05-24T07:52:31,477+0000 [Thread-0] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-0] [multiTopicsReader-2601b84b42] Closed Consumer (not connected)
    pulsar_1          | 2022-05-24T07:52:31,478+0000 [Thread-0] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-9] [multiTopicsReader-2601b84b42] Closed Consumer (not connected)
    pulsar_1          | 2022-05-24T07:52:31,478+0000 [Thread-0] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-8] [multiTopicsReader-2601b84b42] Closed Consumer (not connected)
    pulsar_1          | 2022-05-24T07:52:31,478+0000 [Thread-0] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-7] [multiTopicsReader-2601b84b42] Closed Consumer (not connected)
    pulsar_1          | 2022-05-24T07:52:31,478+0000 [Thread-0] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-6] [multiTopicsReader-2601b84b42] Closed Consumer (not connected)
    pulsar_1          | 2022-05-24T07:52:31,478+0000 [Thread-0] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-5] [multiTopicsReader-2601b84b42] Closed Consumer (not connected)
    pulsar_1          | 2022-05-24T07:52:31,478+0000 [Thread-0] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-4] [multiTopicsReader-2601b84b42] Closed Consumer (not connected)
    pulsar_1          | 2022-05-24T07:52:31,478+0000 [Thread-0] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-3] [multiTopicsReader-2601b84b42] Closed Consumer (not connected)
    pulsar_1          | 2022-05-24T07:52:31,478+0000 [Thread-0] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-2] [multiTopicsReader-2601b84b42] Closed Consumer (not connected)
    pulsar_1          | 2022-05-24T07:52:31,479+0000 [Thread-0] INFO  org.apache.pulsar.client.impl.ConsumerImpl - [persistent://rocketmq/offset/__consumer_offsets-partition-1] [multiTopicsReader-2601b84b42] Closed Consumer (not connected)
    pulsar_1          | 2022-05-24T07:52:31,628+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.GracefulExecutorServicesTerminationHandler - Starting termination handler for 4 executors.
    pulsar_1          | 2022-05-24T07:52:31,629+0000 [Thread-0] INFO  org.apache.pulsar.broker.service.GracefulExecutorServicesTerminationHandler - Shutdown completed.
    

    Errors filtered from the broker logs with grep ERROR:

    docker-compose logs -f | grep ERROR
    pulsar_1          | 2022-05-24T08:08:55,859+0000 [main] ERROR org.apache.bookkeeper.bookie.Journal - Problems reading from data/standalone/bookkeeper0/current/lastMark (this is okay if it is the first time starting this bookie
    pulsar_1          | 2022-05-24T08:08:58,096+0000 [client-scheduler-OrderedScheduler-6-0] ERROR org.apache.bookkeeper.clients.impl.internal.RootRangeClientImplWithRetries - Reason for the failure {}
    pulsar_1          | 2022-05-24T08:08:59,163+0000 [client-scheduler-OrderedScheduler-25-0] ERROR org.apache.bookkeeper.clients.impl.internal.RootRangeClientImplWithRetries - Reason for the failure {}
    pulsar_1          | 2022-05-24T08:09:07,390+0000 [main] ERROR org.apache.pulsar.PulsarStandaloneStarter - Failed to start pulsar service.
    pulsar_1          | 2022-05-24T08:09:20,521+0000 [client-scheduler-OrderedScheduler-8-0] ERROR org.apache.bookkeeper.clients.impl.internal.RootRangeClientImplWithRetries - Reason for the failure {}
    pulsar_1          | 2022-05-24T08:09:27,697+0000 [main] ERROR org.apache.pulsar.PulsarStandaloneStarter - Failed to start pulsar service.
    pulsar_1          | 2022-05-24T08:09:40,263+0000 [client-scheduler-OrderedScheduler-10-0] ERROR org.apache.bookkeeper.clients.impl.internal.RootRangeClientImplWithRetries - Reason for the failure {}
    pulsar_1          | 2022-05-24T08:09:47,554+0000 [main] ERROR org.apache.pulsar.PulsarStandaloneStarter - Failed to start pulsar service.
    pulsar_1          | 2022-05-24T08:10:00,495+0000 [client-scheduler-OrderedScheduler-26-0] ERROR org.apache.bookkeeper.clients.impl.internal.RootRangeClientImplWithRetries - Reason for the failure {}
    pulsar_1          | 2022-05-24T08:10:07,726+0000 [main] ERROR org.apache.pulsar.PulsarStandaloneStarter - Failed to start pulsar service.
    pulsar_1          | 2022-05-24T08:10:20,706+0000 [client-scheduler-OrderedScheduler-4-0] ERROR org.apache.bookkeeper.clients.impl.internal.RootRangeClientImplWithRetries - Reason for the failure {}
    pulsar_1          | 2022-05-24T08:10:27,936+0000 [main] ERROR org.apache.pulsar.PulsarStandaloneStarter - Failed to start pulsar service.
    pulsar_1          | 2022-05-24T08:10:40,966+0000 [client-scheduler-OrderedScheduler-10-0] ERROR org.apache.bookkeeper.clients.impl.internal.RootRangeClientImplWithRetries - Reason for the failure {}
    pulsar_1          | 2022-05-24T08:10:48,250+0000 [main] ERROR org.apache.pulsar.PulsarStandaloneStarter - Failed to start pulsar service.
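
    The grep above drops the stack traces that follow each failure line. A quick way to see the
    underlying cause (the service name "pulsar" comes from the compose file shown further down; the
    context sizes are arbitrary) would be something like:

    docker-compose logs pulsar | grep -B 2 -A 20 "Failed to start pulsar service"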
    

    RocketMQ-related settings in standalone.conf:

    messagingProtocols=rocketmq
    protocolHandlerDirectory=./protocols
    loadManagerClassName=org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
    rocketmqListeners=rocketmq://192.168.2.168:9876
    advertisedListeners=INTERNAL:pulsar://127.0.0.1:6650,INTERNAL:pulsar+ssl://127.0.0.1:6651,INTERNAL_ROP:pulsar://127.0.0.1:9876,INTERNAL_ROP:pulsar+ssl://127.0.0.1:9896
    rocketmqListenerPortMap=9876:INTERNAL_ROP
    brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor
    brokerDeleteInactiveTopicsEnabled=false
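
    Because protocolHandlerDirectory is a relative path, it resolves against the broker's working
    directory (assumed here to be /pulsar in the official image, which is also where the compose
    file further down mounts ./pulsar-cfg/protocols). A minimal sanity check under that assumption
    is to confirm the nar is visible there:

    docker-compose exec pulsar ls /pulsar/protocols
    # should list the RoP nar, e.g. pulsar-protocol-handler-rocketmq-0.2.0.nar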
    

    RoP version: pulsar-protocol-handler-rocketmq-0.2.0; RocketMQ version: 4.8

    701bc9ee452c   foxiswho/rocketmq:4.8.0                    "sh mqbroker -c /etc…"   5 days ago   Up 5 days   0.0.0.0:10909->10909/tcp, 9876/tcp, 10912/tcp, 0.0.0.0:10911->10911/tcp   rmqbroker
    e04bdd592495   apacherocketmq/rocketmq-dashboard:latest   "sh -c 'java $JAVA_O…"   5 days ago   Up 5 days   0.0.0.0:8180->8080/tcp                                                    rmqdashboard
    c22e083be74d   foxiswho/rocketmq:4.8.0                    "sh mqnamesrv"           5 days ago   Up 5 days   10909/tcp, 0.0.0.0:9876->9876/tcp, 10911-10912/tcp                        rmqnamesrv
    

    compose.yaml

    version: '3.3'
    
    services:
      pulsar:
        image: "apachepulsar/pulsar:2.10.0"
        command: bin/pulsar standalone
        restart: always
        ports:
          - "6650:6650"
          - "8080:8080"
        volumes:
        #   - ./pulsar-cfg/data:/pulsar/data
          - ./pulsar-cfg/conf:/pulsar/conf
          - ./pulsar-cfg/protocols:/pulsar/protocols:rw
    
      pulsar-manager:
        image: "apachepulsar/pulsar-manager:v0.2.0"
        restart: always
        ports:
          - "9527:9527"
          - "7750:7750"
        depends_on:
          - pulsar
        links:
          - pulsar
        environment:
          - SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties
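
    Note that bind-mounting ./pulsar-cfg/conf over /pulsar/conf hides every config file shipped in
    the image, so the host directory needs to provide everything the broker expects to find under
    /pulsar/conf, not only the RoP additions. One possible way to seed it from the image before
    appending the RoP settings (the container name pulsar-tmp is arbitrary):

    docker create --name pulsar-tmp apachepulsar/pulsar:2.10.0
    docker cp pulsar-tmp:/pulsar/conf ./pulsar-cfg/
    docker rm pulsar-tmp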
    
    opened by zengzhengrong 0