Building Cloud-Native Data-Intensive Applications with Spring

A use-case demonstration of data-intensive applications built using Spring Cloud Stream, Apache Kafka, and Kafka Streams.

The use case revolves around the User domain object. In Domain-Driven Design terms, it is the aggregate for this use case.

The commands and queries for the aggregate can be found in the domain class. Create, Activate, Name Change, and Deactivate are the behaviors of this aggregate.
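The snippet below is a minimal sketch of how such an aggregate and its events might be modeled. The class names, fields, and event-type strings are illustrative assumptions, not the repository's actual code.

import java.util.UUID;

// Hypothetical sketch only; the real domain classes live in this repository.
class DomainEvent {
    final UUID userId;
    final String type; // e.g. "UserCreated", "UserActivated"

    DomainEvent(UUID userId, String type) {
        this.userId = userId;
        this.type = type;
    }
}

class User {
    enum State { CREATED, ACTIVATED, DEACTIVATED }

    private final UUID id = UUID.randomUUID();
    private String name;
    private State state;

    // The four behaviors of the aggregate; each one returns the DomainEvent
    // that the producer publishes to Kafka.
    DomainEvent create(String name) {
        this.name = name;
        this.state = State.CREATED;
        return new DomainEvent(id, "UserCreated");
    }

    DomainEvent activate() {
        this.state = State.ACTIVATED;
        return new DomainEvent(id, "UserActivated");
    }

    DomainEvent changeName(String newName) {
        this.name = newName;
        return new DomainEvent(id, "UserNameChanged");
    }

    DomainEvent deactivate() {
        this.state = State.DEACTIVATED;
        return new DomainEvent(id, "UserDeactivated");
    }
}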

Note
This demonstration builds upon the material from Kenny and Jakub's SpringOne 2017 presentation.

Projects included in the demo:

  • userproducer: A Spring Cloud Stream "producer" App that generates new users and updates the aggregate state at random intervals. Each payload is represented as a DomainEvent.

  • userconsumer: A Spring Cloud Stream "consumer" App that consumes the DomainEvent payloads exchanged between the two bounded contexts.

    • The App includes multiple @StreamListener functions: one consumes new payloads and triggers downstream events; the other uses the KStream API to perform stateful operations (see the sketch after this list).

    • The App also includes a @RestController that uses Kafka Streams interactive queries to read the KTable materialized from the KStream.

    • A single-page-application presents the real-time events using Google Maps, Timeline, and Gauge charts.
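Below is a condensed, hypothetical sketch of these pieces, reusing the illustrative DomainEvent from the sketch above. The binding names, the state-store name, and the endpoint path are assumptions for illustration, not the repo's actual identifiers.

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Two input bindings: a plain channel and a Kafka Streams KStream.
interface UserChannels {
    @Input("events")
    SubscribableChannel events();

    @Input("eventsStream")
    KStream<String, DomainEvent> eventsStream();
}

@EnableBinding(UserChannels.class)
class UserEventListeners {

    // Imperative listener: consumes each DomainEvent and triggers
    // whatever downstream processing the use case needs.
    @StreamListener("events")
    void handle(DomainEvent event) {
        // react to the event here
    }

    // Kafka Streams listener: a stateful count per user, materialized
    // into a queryable KTable state store.
    @StreamListener("eventsStream")
    void process(KStream<String, DomainEvent> events) {
        events.groupByKey()
              .count(Materialized.as("users-count-store"));
    }
}

@RestController
class UserCountController {

    private final InteractiveQueryService queryService;

    UserCountController(InteractiveQueryService queryService) {
        this.queryService = queryService;
    }

    // Interactive query against the KTable's backing state store.
    @GetMapping("/counts/{userId}")
    Long count(@PathVariable String userId) {
        ReadOnlyKeyValueStore<String, Long> store = queryService.getQueryableStore(
                "users-count-store", QueryableStoreTypes.keyValueStore());
        return store.get(userId);
    }
}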

Run it Locally

Clone the repo and run the Maven build.

mvn -U clean install

Steps

1) Start Kafka 2.0 (and ZooKeeper)
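For example, using the scripts shipped with the Kafka distribution (run from the Kafka installation directory):

bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties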

2) Start the userconsumer App:

java -jar userconsumer/target/user-consumer-0.0.1-SNAPSHOT.jar --server.port=9002

3) Download the websocket-sink App and start it:

java -jar websocket-sink-kafka-10-1.3.1.RELEASE.jar --spring.cloud.stream.bindings.input.destination=userscount --spring.cloud.stream.bindings.input.contentType=text/plain --spring.cloud.stream.bindings.input.consumer.headerMode=raw

4) Launch the single-page-application at: http://localhost:9002/users.html

5) Start the userproducer App. It comes last so that the consumers are already up and running, which makes it easy to reason through the new events as they arrive from the producer:

java -jar userproducer/target/user-producer-0.0.1-SNAPSHOT.jar --server.port=9001

As soon as userproducer starts generating new users, the events flow through Kafka and are consumed automatically by the userconsumer App. The UI updates the map, timeline, and gauge charts in real time.

Run it on Kubernetes

The same set of applications can be deployed on Kubernetes. Spring Cloud Data Flow (SCDF) handles exactly these orchestration and deployment needs. It is much simpler to deploy the applications as a stream in SCDF, which also provides the opportunity to centrally manage, visualize, and monitor all the moving pieces.

Prerequisites

1) A Kubernetes cluster (1.9.x or later).

2) Install Spring Cloud Data Flow with Helm (see the example after this list).

3) Create LoadBalancer services for the websocket and consumer Apps (see the example after this list).
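For example, assuming Helm 2 and the stable chart repository of that era (the deployment and service names below are illustrative assumptions; adjust them to what actually runs in your cluster):

helm repo update
helm install --name scdf stable/spring-cloud-data-flow
kubectl expose deployment user-consumer --name=consumer-lb --type=LoadBalancer --port=80 --target-port=9002

A similar kubectl expose command creates the LoadBalancer for the websocket App.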

Steps

1) Clone the repo, run the Maven build, and generate Docker images.

mvn package dockerfile:build -pl userproducer
mvn package dockerfile:build -pl userconsumer

2) Publish the Docker images to Docker Hub (replace <DOCKER_HUB_USER> with your Docker Hub account).

docker push <DOCKER_HUB_USER>/user-producer:0.0.1-SNAPSHOT
docker push <DOCKER_HUB_USER>/user-consumer:0.0.1-SNAPSHOT

3) Register the Apps in SCDF.
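A hypothetical example from the SCDF shell, assuming the Apps are registered as a source and a sink (replace <DOCKER_HUB_USER> as before):

app register --name user-producer --type source --uri docker:<DOCKER_HUB_USER>/user-producer:0.0.1-SNAPSHOT
app register --name user-consumer --type sink --uri docker:<DOCKER_HUB_USER>/user-consumer:0.0.1-SNAPSHOT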

4) Create and deploy the stream (a hypothetical definition is sketched below).
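A hypothetical definition, assuming the source and sink registration above (the actual stream definition may differ):

stream create --name users --definition "user-producer | user-consumer" --deploy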

5) Confirm deployment and verify that the pods are up and running.
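For example:

kubectl get pods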

6) Launch the single-page-application at: http://<CONSUMER_LB_HOST>/users.html