# Strangler Fig Pattern Demo

## Build applications
Before spinning up the docker-compose based demo environment, make sure to successfully build all three projects: the legacy petclinic monolith based on Spring 3, the Apache Kafka Streams application, and the owner microservice powered by Quarkus:

```shell
./build-applications.sh
```
## Run with Docker

Spin up the demo environment by means of Docker Compose:

```shell
docker-compose up
```
## Strangler Fig Proxy

This demo uses nginx as a proxy which, depending on the request, routes either to the petclinic monolith or to the extracted owner microservice.
- The proxy serves a static index page at `http://localhost` just to verify it's up and running.
- The initial proxy configuration - see `docker/.config/nginx/nginx_initial.conf` - routes all requests starting with `http://localhost/petclinic` to the monolithic application.
- As a first adaptation, the proxy is reconfigured - see `docker/.config/nginx/nginx_read.conf` - to route a specific read request, i.e. the owner search, from the monolith to the corresponding owner microservice. NOTE: For this to work, the CDC setup (from monolith -> microservice) needs to be configured properly based on Apache Kafka Connect (see below).
- As a second adaptation, the proxy is reconfigured - see `docker/.config/nginx/nginx_read_write.conf` - to route owner edit requests to the owner microservice as well. NOTE: For this to work, the CDC setup (from microservice -> monolith) needs to be configured properly based on Apache Kafka Connect (see below).
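The routing described above boils down to nginx `location` blocks with different `proxy_pass` targets. A minimal sketch of what the read-phase configuration might look like - upstream names, container hostnames, and ports here are assumptions for illustration, not taken from the actual config files:

```nginx
# Hypothetical sketch of the read-phase routing; the real file is
# docker/.config/nginx/nginx_read.conf and may differ in names and ports.
http {
    upstream monolith {
        server petclinic:8080;       # assumed container name/port of the monolith
    }
    upstream owner-service {
        server owner-service:8081;   # assumed container name/port of the microservice
    }

    server {
        listen 80;

        # the owner search (read) is already served by the extracted microservice;
        # nginx picks the longest matching prefix, so this wins over /petclinic
        location /petclinic/owners/find {
            proxy_pass http://owner-service;
        }

        # everything else still goes to the monolith
        location /petclinic {
            proxy_pass http://monolith;
        }
    }
}
```

Switching phases then just means swapping in a config file with more (or fewer) `location` blocks pointing at the microservice.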
### Proxy Reconfiguration

With the docker compose stack up and running, the proxy can be reconfigured by running the following script with one of three supported parameters:

```shell
./proxy_config.sh initial | read | read_write
```
## Apache Kafka Connect Setup for CDC Pipelines

### CDC from monolith (MySQL) -> microservice (MongoDB)

- Create the MySQL source connector for the owners and pets tables:

  ```shell
  http POST http://localhost:8083/connectors/ < register-mysql-source-owners-pets.json
  ```

- Create the MongoDB sink connector for pre-joined owner-with-pets aggregates:

  ```shell
  http POST http://localhost:8083/connectors/ < register-mongodb-sink-owners-pets.json
  ```
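For orientation, `register-mysql-source-owners-pets.json` is essentially a Debezium MySQL connector registration. The following is a hypothetical sketch of what it could contain - hostnames, credentials, and exact property values are assumptions inferred from the connector name and the topic names used further below, not the actual file contents:

```json
{
  "name": "petclinic-owners-pets-mysql-src-001",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "mysql1",
    "table.include.list": "petclinic.owners,petclinic.pets",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.petclinic"
  }
}
```

The `database.server.name` of `mysql1` would explain the topic naming pattern `mysql1.petclinic.owners` / `mysql1.petclinic.pets` seen when consuming topics below. (Depending on the Debezium version, the table filter property may be `table.whitelist` instead of `table.include.list`.)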
### CDC from microservice (MongoDB) -> monolith (MySQL)

- Before processing writes for owner data at the microservice in MongoDB, update the MySQL source connector for the monolith's database so that it ignores changes happening in the owners table. Otherwise, CDC would run for the same data from both sides, which would lead to a propagation cycle.

  ```shell
  http PUT http://localhost:8083/connectors/petclinic-owners-pets-mysql-src-001/config < update-mysql-source-owners-pets.json
  ```
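Conceptually, `update-mysql-source-owners-pets.json` holds the same connector configuration with the owners table dropped from the table filter, so only pet changes are still sourced from MySQL. Note that the Kafka Connect `PUT /connectors/{name}/config` endpoint expects the flat config map, without the `name`/`config` wrapper used on creation. A hypothetical sketch (values are assumptions, not the actual file contents):

```json
{
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.hostname": "mysql",
  "database.port": "3306",
  "database.user": "debezium",
  "database.password": "dbz",
  "database.server.id": "184054",
  "database.server.name": "mysql1",
  "table.include.list": "petclinic.pets",
  "database.history.kafka.bootstrap.servers": "kafka:9092",
  "database.history.kafka.topic": "schema-changes.petclinic"
}
```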
- Create the MongoDB source connector to capture data changes in the microservice's data store:

  ```shell
  http POST http://localhost:8083/connectors/ < register-mongodb-source-owners.json
  ```

- Create the MySQL JDBC sink connector to propagate changes from the microservice into the monolith's database:

  ```shell
  http POST http://localhost:8083/connectors/ < register-jdbc-mysql-sink-owners.json
  ```
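The JDBC sink side, `register-jdbc-mysql-sink-owners.json`, would typically be a Confluent JDBC sink connector writing change events back into the monolith's MySQL schema. The following sketch is purely illustrative - the connector name, topic, table, and credentials are assumptions, and the real file likely adds transforms to unwrap the change events before they hit JDBC:

```json
{
  "name": "petclinic-owners-jdbc-mysql-sink-001",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:mysql://mysql:3306/petclinic",
    "connection.user": "petclinic",
    "connection.password": "petclinic",
    "topics": "mongodb.petclinic.owners",
    "table.name.format": "owners",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "id"
  }
}
```

Using `insert.mode=upsert` keyed on the owner id is what makes the write path idempotent: replayed or re-delivered change events simply overwrite the same row instead of creating duplicates.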
## Consume messages from CDC-related Apache Kafka topics

```shell
docker run --tty --rm \
  --network voxxedromania21-sfp-demo_default \
  debezium/tooling:1.1 \
  kafkacat -b kafka:9092 -C -t mysql1.petclinic.owners -o beginning -q | jq .

docker run --tty --rm \
  --network voxxedromania21-sfp-demo_default \
  debezium/tooling:1.1 \
  kafkacat -b kafka:9092 -C -t mysql1.petclinic.pets -o beginning -q | jq .

docker run --tty --rm \
  --network voxxedromania21-sfp-demo_default \
  debezium/tooling:1.1 \
  kafkacat -b kafka:9092 -C -t kstreams.owners-with-pets -o beginning -q | jq .

docker run --tty --rm \
  --network voxxedromania21-sfp-demo_default \
  debezium/tooling:1.1 \
  kafkacat -b kafka:9092 -C -t mongodb.petclinic.kstreams.owners-with-pets -o beginning -q | jq .
```
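Since the change events are JSON, `jq` can do more than pretty-printing. As a standalone illustration - using a simplified, hypothetical event shape rather than an actual record from these topics - a single field can be pulled out of an event's `after` block:

```shell
# Simplified Debezium-style change event: "after" holds the new row state,
# "op" the operation type ("c" = create). Extract the owner's first name:
echo '{"before": null, "after": {"id": 1, "first_name": "George", "last_name": "Franklin"}, "op": "c"}' \
  | jq -r '.after.first_name'
# -> George
```

The same filter can be appended to any of the `kafkacat` pipelines above to watch just one field of interest scroll by.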