Apache Camel is an open source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data.

Overview

Apache Camel


Apache Camel is a powerful, open-source integration framework based on prevalent Enterprise Integration Patterns with powerful bean integration.

Introduction

Camel enables the creation of the Enterprise Integration Patterns to implement routing and mediation rules in either a Java-based Domain Specific Language (or Fluent API), via Spring or Blueprint based XML configuration files, or via the Scala DSL. This means you get smart completion of routing rules in your IDE, whether in your Java, Scala, or XML editor.
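For example, a minimal route in the Java DSL might look like this (a sketch only; the endpoint URIs, directory names, and header name are illustrative, and the classes assume a recent Camel 3.x):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileRoute {
    public static void main(String[] args) throws Exception {
        // DefaultCamelContext is AutoCloseable in Camel 3.x
        try (DefaultCamelContext context = new DefaultCamelContext()) {
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    from("file:data/inbox")                        // poll files from data/inbox
                        .filter(header("ready").isEqualTo("true")) // routing rule: only "ready" files
                        .to("file:data/outbox");                   // write matches to data/outbox
                }
            });
            context.start();
            Thread.sleep(5000); // let the route poll briefly before shutting down
        }
    }
}
```

The same route could equally be expressed in Spring or Blueprint XML; the DSLs are interchangeable views of the same routing model.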

Apache Camel uses URIs to enable easier integration with all kinds of transport and messaging models, including HTTP, ActiveMQ, JMS, JBI, SCA, MINA, and CXF, together with pluggable Data Format options. Apache Camel is a small library with minimal dependencies, for easy embedding in any Java application. Apache Camel lets you work with the same API regardless of the transport type, so once you learn the API you can interact with all the components provided out of the box.

Apache Camel has powerful Bean Binding and integrates seamlessly with popular frameworks such as Spring, CDI, and Blueprint.
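Bean Binding lets a plain POJO method act as a route step, with the message body bound to the method parameter. A small sketch (the class, method, and endpoint names here are illustrative, not from the Camel docs):

```java
import org.apache.camel.builder.RouteBuilder;

public class OrderRoutes extends RouteBuilder {

    // A plain POJO; no Camel types needed in the service itself.
    public static class OrderService {
        public String confirm(String orderId) { // message body is bound to orderId
            return "confirmed:" + orderId;      // return value becomes the new body
        }
    }

    @Override
    public void configure() {
        from("direct:orders")
            .bean(OrderService.class, "confirm") // invoke the POJO method by name
            .to("log:orders");
    }
}
```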

Apache Camel has extensive testing support allowing you to easily unit test your routes.
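A typical unit test uses the camel-test module's CamelTestSupport base class together with mock endpoints (sketched below against the JUnit 4 flavor; the route itself is illustrative):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class SimpleRouteTest extends CamelTestSupport {

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:start")
                    .transform(simple("Hello ${body}")) // route under test
                    .to("mock:result");
            }
        };
    }

    @Test
    public void testGreeting() throws Exception {
        // Declare expectations on the mock endpoint, send a message, verify.
        getMockEndpoint("mock:result").expectedBodiesReceived("Hello Camel");
        template.sendBody("direct:start", "Camel");
        assertMockEndpointsSatisfied();
    }
}
```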

Components

Apache Camel ships with several artifacts providing components, data formats, and languages. The up-to-date list is available online at the Camel website:

Examples

Apache Camel comes with many examples. The up-to-date list is available online on GitHub:

Getting Started

To help you get started, try the following links:

Getting Started

http://camel.apache.org/getting-started.html

The beginner examples are another good way to get started with Apache Camel.

Building

http://camel.apache.org/building.html

Contributions

We welcome all kinds of contributions, the details of which are specified here:

https://github.com/apache/camel/blob/master/CONTRIBUTING.md

Please refer to the website for details on the issue tracker, mailing lists, GitHub repository, and chat:

Website: http://camel.apache.org/

Github (source): https://github.com/apache/camel

Issue tracker: https://issues.apache.org/jira/projects/CAMEL

Mailing-list: http://camel.apache.org/mailing-lists.html

Chat: https://camel.zulipchat.com/

StackOverflow: https://stackoverflow.com/questions/tagged/apache-camel

Twitter: https://twitter.com/ApacheCamel

Support

For additional help and support, we recommend starting with this page:

http://camel.apache.org/support.html

Getting Help

If you get stuck somewhere, please feel free to reach out to us on either StackOverflow, Chat, or the email mailing list.

Please help us make Apache Camel better - we appreciate any feedback you may have.

Enjoy!


The Camel riders!

Licensing

The terms for software licensing are detailed in the LICENSE.txt file, located in this directory.

This distribution includes cryptographic software. The country in which you currently reside may levy restrictions on the import, possession, use, and re-export to foreign countries of encryption software. BEFORE using any encryption software, please check your country's laws, regulations, and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted. See http://www.wassenaar.org/ for more information.

The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS) has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, which includes information security software using or performing cryptographic functions with asymmetric algorithms. The form and manner of this Apache Software Foundation distribution makes it eligible for export under the License Exception ENC Technology Software Unrestricted (TSU) exception (see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.

The following provides more details on the included cryptographic software:

  • camel-ahc can be configured to use https.
  • camel-atmosphere-websocket can be used for secure communications.
  • camel-crypto can be used for secure communications.
  • camel-cxf can be configured for secure communications.
  • camel-ftp can be configured for secure communications.
  • camel-http can be configured to use https.
  • camel-infinispan can be configured for secure communications.
  • camel-jasypt can be used for secure communications.
  • camel-jetty can be configured to use https.
  • camel-mail can be configured for secure communications.
  • camel-nagios can be configured for secure communications.
  • camel-netty-http can be configured to use https.
  • camel-undertow can be configured to use https.
  • camel-xmlsecurity can be configured for secure communications.
Comments
  • CAMEL-18661: Make span current and clear scope properly for async processing

    CAMEL-18661: Make span current and clear scope properly for async processing

Fixes https://issues.apache.org/jira/browse/CAMEL-18661

~~Looking for input on the overall approach~~

This change enables current-span propagation to underlying libraries and end-user code (for OpenTelemetry): for every span created by Camel, it calls span.makeCurrent().

However, OpenTelemetry (and other tracing tools) rely on ThreadLocals to propagate context. They handle Executors, Reactor, etc. carefully, so it works in async scenarios too.

Instrumentations have to close the scope returned by span.makeCurrent() on the same thread where it was created, to avoid leaking context (by leaving the current span set on that thread).
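The same-thread requirement can be illustrated with the plain OpenTelemetry API (a generic sketch of the makeCurrent()/Scope pattern, not Camel's actual tracer code):

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class ScopeExample {
    public static void main(String[] args) {
        Tracer tracer = GlobalOpenTelemetry.getTracer("example");
        Span span = tracer.spanBuilder("process").startSpan();
        // makeCurrent() stores the span in a ThreadLocal-backed Context;
        // the returned Scope MUST be closed on the same thread, or the
        // span leaks onto unrelated work later scheduled on that thread.
        try (Scope scope = span.makeCurrent()) {
            // ... user code and downstream libraries see Span.current() here
        } finally {
            span.end(); // ending the span does not restore the previous context
        }
    }
}
```

In Camel's async case the try-with-resources above cannot be used directly, because the start and end of the exchange happen on different threads; that is what the new event is for.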

This is easy to guarantee with sync operations in Camel: the ExchangeStarted/Sending and ExchangeCompleted/Sent events (where spans start and end) are called on the same thread.

    However, for async operations, they are called on different threads.

This change also adds a new event, ExchangeAsyncStarted (naming needs more work), which signals that the Processor.process call has returned, i.e. the async operation has started. Tracers end the scope (but not the span) when this event is sent.

    Alternative approach:

    add a dependency on camel-tracing to camel-base-engine and call into ActiveSpanManager directly:

    try(AutoCloseable scope = ActiveSpanManager.makeSpanCurrent(exchange)) {
        sync = processor.process(exchange, async);
    }
    

This would be easier to maintain and would more or less guarantee that the scope is disposed on the same thread; however, a dependency on tracing might not be desirable.

    Traces with this change

    (tick/testme spans come from Camel and were previously not correlated with ServiceBus spans coming from Azure SDK)

    components core 
    opened by lmolkova 39
  • CAMEL-11261 Revise Camel context destruction in Spring (Boot) applications

    CAMEL-11261 Revise Camel context destruction in Spring (Boot) applications

    Submitted for review, thanks 👍

    See the discussion on the user forum.

    This makes CamelSpringBootApplicationController lifecycle bean with top priority during ApplicationContext close.

    opened by zregvart 37
  • CAMEL-13807: Add Component DSL fluent builders

    CAMEL-13807: Add Component DSL fluent builders

    This PR implements the Component DSL according to CAMEL-13807. A few notes:

    1. To create a component, use it like this: ComponentsBuilderFactory.kafka().setBrokers("{{host:port}}").build()
    2. Since I needed to maintain the POM file and the ComponentsBuilderFactory interface, I used a metadata JSON file to keep track of the generated DSL classes and update whatever is needed.
    3. To set the properties, I used the component property configurer, since it generates what is needed.
    4. Since I used the metadata file approach, the build speed wasn't impacted much, which is good.
    5. If placeholders need to be resolved, the component needs to be built with the context passed as a parameter: ComponentsBuilderFactory.kafka().setBrokers("{{host:port}}").build(camelContext)
    6. There is a lot of shared functionality between EndpointDslMojo and ComponentsDslMojo, hence I created an abstract class and helpers shared by both. However, I left EndpointDslMojo untouched for now and will remove some shared code once I am done with this PR.

    Topics that still need to be done:

    • [x] Fix checkstyle.
    • [x] Add unit tests for the maven plugin generators.
    • [x] Add more unit tests for camel-componentDsl.
    • [x] Optimize some classes on the go.
    opened by omarsmak 34
  • CAMEL-10178: Added Google PubSub Component

    CAMEL-10178: Added Google PubSub Component

    Google PubSub component with producer and consumer endpoints, plus unit and integration tests. Speed optimisations (mini-batching, async processing) will be done in subsequent releases, dependent on the new client library from Google. For now it is synchronous, message-by-message processing.

    opened by evmin 33
  • CAMEL-11879: Upgraded lucene version to 7.0.0

    CAMEL-11879: Upgraded lucene version to 7.0.0

    https://issues.apache.org/jira/browse/CAMEL-11879 Upgraded Lucene to latest version 7.0.0.

    Passed the iTests for Karaf and OSGi without issues, including the camel-lucene component tests.

    opened by vrlgohel 32
  • CAMEL-14910: Bundling of the heavily distributed components

    CAMEL-14910: Bundling of the heavily distributed components

    • Particular queries were raised in this issue:
    1. Should there be summary pages for groups of related components? Summary pages already existed for some groups, including Ignite, Spring, and a few others. As this helps clarify the documentation, I also added summaries for other groups, including AWS, AWS 2, and Google.

    2. What should their structure and appearance be? In the nav panel, I believe the highly distributed related components should be bundled and grouped together under their parent component.

    3. Should the nav pane indent the components in a group? In the user manual documentation for Camel Kafka and others, the nav pane already indents subgroups under parent groups, so it has been done in the same manner for this PR.

    opened by AemieJ 30
  • [CAMEL-9145] update hbase version to 1.1.1 and hadoop2 to 2.7.1

    [CAMEL-9145] update hbase version to 1.1.1 and hadoop2 to 2.7.1

    Before the upgrade I was not able to run the tests properly, because of an error during setup of the HadoopMinicluster: all tests were reported as passing, but were never actually started. I am on a Linux environment.

    After the update I was able to run the tests. There are 18 positive results and 2 negative in my environment. I cannot check how many tests failed on the previous versions, because of the error mentioned above.

    opened by woj-i 30
  • Camel 13742 Extend Camel-cmis with new operations

    Camel 13742 Extend Camel-cmis with new operations

    Operations :

    • Delete folder
    • Delete document
    • Move folder
    • Move document
    • Rename folder
    • Rename document
    • Copy folder
    • Copy document
    • CheckIn
    • CheckOut
    • CancelCheckOut

    I have to mention that I am not able to build the original Apache Camel project, which means I can't run the tests written by the author of camel-cmis. I worked only on the camel-cmis module in a separate project, and I tested all new operations against an Alfresco repository.

    opened by cherepnalkovski 29
  • ✨ camel-whatsapp component

    ✨ camel-whatsapp component

    • [ ] Make sure there is a JIRA issue filed for the change (usually before you start working on it). Trivial changes like typos do not require a JIRA issue. Your pull request should address just this issue, without pulling in other changes.
    • [ ] Each commit in the pull request should have a meaningful subject line and body.
    • [ ] If you're unsure, you can format the pull request title like [CAMEL-XXX] Fixes bug in camel-file component, where you replace CAMEL-XXX with the appropriate JIRA issue.
    • [ ] Write a pull request description that is detailed enough to understand what the pull request does, how, and why.
    • [ ] Run mvn clean install -Psourcecheck in your module with source check enabled to make sure basic checks pass and there are no checkstyle violations. A more thorough check will be performed on your pull request automatically. Below are the contribution guidelines: https://github.com/apache/camel/blob/main/CONTRIBUTING.md

    Hello, since WhatsApp Business released a new public API, https://developers.facebook.com/docs/whatsapp/cloud-api/, I created a new component, camel-whatsapp, which integrates Camel with that API (much like the telegram component).

    As you can see from the integration test, the producer works as expected, but before adding more features, documentation, and so on, I'd like your feedback on whether this component is worth having.

    Moreover, I have some questions:

    • I'm using async-http-client as the HTTP client, but it seems the project is no longer maintained; should I use another library?
    • As you can see from the cloud-api documentation, webhooks can be used; I was wondering whether camel-webhook could be used here, and I could use some help with that.

    The current test output is shown in a screenshot attached to the PR.

    components catalog 
    opened by Croway 28
  • CAMEL-10671: Adding Camel example project for the Ceylon JVM language

    CAMEL-10671: Adding Camel example project for the Ceylon JVM language

    Adding Camel example for Ceylon 1.3.3.

    Followed the path of the Kotlin example as instructed on the ticket.

    Compile/Runs without errors and I'm able to access Jetty's localhost:8080 for the welcome message.

    Things I couldn't get around:

    1. The module.ceylon file cannot refer to Maven POM versions/properties, so the versions of the used libraries unfortunately have to be hardcoded (double-checked on the Ceylon Gitter channel too)
    2. I had to hardcode the version of camel-http-common, since 2.22.0-SNAPSHOT does not exist
    opened by dimitrisli 28
  • Camel Firebase Component

    Camel Firebase Component

    This component allows using Google Firebase (see https://firebase.google.com/docs/admin/setup) from Camel. It contains an endpoint with a Producer and a Consumer. The producer can create and update values in Firebase. The consumer listens to child events and can consume add, change, remove, move, and cancel operations.

    You will find some integration unit tests in this first commit, but these tests need a JSON file ('firebase-admin-connection.json') with the Firebase keys in order to run. For security reasons I have removed it, and I am not quite sure how to solve that.

    I am a newbie on this project, so it may well be that I am not complying with your standards. Please guide me where needed.

    opened by gilfernandes 27
  • [CAMEL-5963] camel-smpp: add Transceiver (TRX) support

    [CAMEL-5963] camel-smpp: add Transceiver (TRX) support

    Fixes CAMEL-5963 by adding a new producer-specific URI parameter called messageReceiverRouteId. It is optional and therefore backward compatible (i.e. if not set, the previous TX behavior is seen exactly as before). If set, the user does not need to define a corresponding redundant consumer: Camel uses the specified route as the consumer, internally sharing one and the same SmppSession with the producer. Example:

                    from("direct:start")
                            .to("smpp://j@localhost:8056?password=jpwd&systemType=producer" +
                                "&messageReceiverRouteId=testMessageReceiverRouteId");
    
                    from("direct:messageReceiver").id("testMessageReceiverRouteId")
                            .to("mock:result");
    

    CamelContext startup fails fast, via a registered StartupListener, when no route can be found for the specified id. This is integration-tested in org/apache/camel/component/smpp/integration/SmppTRXProducerIT.java

    NOTE: when the user's SMPP server doesn't support TRX, it is still possible to define a separate producer (TX by default) and consumer (RX by default, unchanged) as before.

    components 
    opened by yasserzamani 3
  • Problem when parameter type of bean method is String.

    Problem when parameter type of bean method is String.

    [camel-bean] Problems when the parameter type of a bean method is String, and with more than two parameters.

    The expression evaluates the bean method parameters and should provide the correct values when the method is invoked.

    There are several problems:

    1. Quotes are removed by the splitSafeQuote method used inside the evaluate method: splitSafeQuote("'Camel', 'Test'", ",", true) -> ["Camel","Test"]
    2. Because the quote is removed, false is returned from the isValidParameterValue method: isValidParameterValue("Camel") -> returns false
    3. ClassLoader lookups can block the thread on disk I/O, and this behavior degrades performance.

    This PR also adds TestCase.
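For reference, the quote-preserving split the issue asks for can be sketched in plain Java (a simplified illustration, not Camel's actual splitSafeQuote implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class QuoteSplit {
    // Split on the separator, but not inside single-quoted segments,
    // and keep the quotes in the returned tokens.
    public static List<String> splitKeepQuotes(String input, char separator) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inQuote = false;
        for (char c : input.toCharArray()) {
            if (c == '\'') {
                inQuote = !inQuote;
                current.append(c);          // quotes are preserved in the token
            } else if (c == separator && !inQuote) {
                tokens.add(current.toString().trim());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        tokens.add(current.toString().trim());
        return tokens;
    }

    public static void main(String[] args) {
        // prints: ['Camel', 'Test']
        System.out.println(splitKeepQuotes("'Camel', 'Test'", ','));
    }
}
```

With this behavior, splitting "'Camel', 'Test'" yields ["'Camel'", "'Test'"], so the downstream quote check can still tell that the parameters were quoted literals.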

    components core 
    opened by Luke-hbk 11
  • Adding HTTPs support to AS2 component

    Adding HTTPs support to AS2 component

    Trying to add SSLContext option to AS2Configuration.

    I added a new SSLContext field to the AS2Configuration class and got the error "Unable to create mojo: Empty doc for option: sslContext, parent options: " while compiling. Following the contribution page, I have already added a javadoc comment before the setter method (I didn't see one before the getter methods of other fields, so I skipped it for the getter) and @UriParam on the field declaration.

    (I had written code adding the SSLContext configuration at the component and endpoint level, but I am keeping the code minimal here. I also changed SSLContext to a String, and am getting the same error.)

    components 
    opened by shikhar97gupta 9
  • WIP [CAMEL-18698] Add support for multiple input/output data types on components

    WIP [CAMEL-18698] Add support for multiple input/output data types on components

    WIP do not merge!

    This PR is a work in progress, seeking guidance and help on implementing the multiple data types on components feature described in https://issues.apache.org/jira/browse/CAMEL-18698

    The changes made in this PR should be seen as a POC starting some discussions and refinements of the feature.

    The idea is to let the user choose a specific data type that the component is producing as an output. The component itself may offer specific data transformation logic that gets automatically triggered when this data type is explicitly used on an endpoint URI:

    from("kafka:topic?format=avro").to("aws2-s3:bucketName?format=string")
    

    The format option tells the component to apply the specific data type as an input or output. The data type may be directly supported by the component, or it may be a generic data type that uses the Camel TypeConverter logic to transform from one data type to another.

    In the PR the feature is implemented as a POC for aws2-ddb and aws2-s3 components. The aws2 components often use Java POJO domain objects as input and output so the user needs to know the domain model when interacting with the components.

    The idea is to also add support for more generic data types on these components. As an example, the user may provide a generic JSON structure as input instead of the AWS domain object; the component takes care of transforming that data type into the Java POJO required by the AWS client libraries.

    On the output side the aws2-s3 component by default produces byte[] or InputStream output. With the addition of data types the user is able to request another data type such as String. The component makes sure to apply transformation logic from the AWS domain model InputStream implementation to a String.
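The TypeConverter fallback mentioned above could be sketched roughly like this (an illustrative processor, not the PR's actual data type classes):

```java
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

// Rough sketch: use Camel's TypeConverter to turn the S3 payload
// (a byte[] or InputStream) into a String on the exchange.
public class StringDataTypeProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        Object body = exchange.getMessage().getBody();
        // The TypeConverter consults the registered converters
        // (stream handling, charset handling, etc.) to produce
        // the requested target type.
        String converted = exchange.getContext().getTypeConverter()
                .convertTo(String.class, exchange, body);
        exchange.getMessage().setBody(converted);
    }
}
```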

    This feature benefits Camel users, who no longer have to know about domain-specific Java types. Declarative Camel DSL use cases, e.g. in Camel K and Kamelets, also benefit from the automatic data type conversion, as discussed in https://github.com/apache/camel-k/issues/1980

    A Kamelet may use the provided data types information and expose this as part of the Kamelet specification.

    apiVersion: camel.apache.org/v1alpha1
    kind: Kamelet
    metadata:
      name: aws-s3-source
    spec:
      definition:
        title: "AWS S3 Source"
      properties:
        ...
      types:
        out:
          default: binary
          binary:
            mediaType: application/octet-stream
          stream:
            mediaType: application/octet-stream
          json:
            mediaType: application/json
            schema:
              type: object
              required:
                - key
                - fileContent
              properties:
                key:
                  title: Key
                  description: The S3 file name key
                  type: string
                fileContent:
                  title: Content
                  description: The S3 file content as String
                  type: string
            dependencies:
              - "camel:jackson"
          string:
        mediaType: text/plain
          avro:
            mediaType: application/avro
    

    Also the user is able to choose the input/output data type in a binding:

    apiVersion: camel.apache.org/v1alpha1
    kind: KameletBinding
    metadata:
      name: aws-s3-uri-binding
    spec:
      source:
        ref:
          kind: Kamelet
          apiVersion: camel.apache.org/v1alpha1
          name: aws-s3-source
        output:
          type: avro
          schema:
            uri: http://schema-registry/person
        properties:
          outputFormat: avro
          bucketNameOrArn: ${aws.s3.bucketNameOrArn}
      sink:
        uri: log:info
    

    The new data types implementation provided in this PR uses a specific SPI annotation, @DataType. A resolver mechanism, DefaultDataTypeResolver, looks up the respective data type implementation given a component scheme and a format name that identifies the data type.

    The PR definitely needs more polishing and has some ToDos in the code that seek guidance on how to do things the Camel way (e.g. how to do an automatic resource lookup for data type implementations provided by components).

    Also, it would be nice to have some guidance on the already existing org.apache.camel.spi.DataType and how it could relate to this implementation.

    Many thanks in advance!

    components core tooling components-aws 
    opened by christophd 2
  • CAMEL-18341 - Upgrade from Codehaus Groovy 3.0.12 to Apache Groovy 4.0.4

    CAMEL-18341 - Upgrade from Codehaus Groovy 3.0.12 to Apache Groovy 4.0.4

    Note that the groupId of the Maven artifact has been modified, but the old package names have not; additionally, some packages named org.apache.groovy.* have been created.

    It requires upgrading Spock to a milestone version.

    Tests were launched locally and passed for:

    • component camel-groovy
    • component camel-grape
    • camel-catalog
    • camel-groovy-dsl
    components catalog dsl tooling tooling-maven 
    opened by apupier 2