Using Spring for Apache Kafka to manage a Distributed Data Model in MongoDB across multiple microservices

Given a modern distributed system composed of multiple microservices, each possessing a sub-set of a domain's aggregate data, the system will almost assuredly have some data duplication. Given this duplication, how do we maintain data consistency? In this two-part post, we will explore one possible solution to this challenge: Apache Kafka and the model of eventual consistency.

Introduction

Apache Kafka is an open-source distributed event streaming platform capable of handling trillions of messages. According to Confluent, Kafka, initially conceived as a messaging queue, is based on an abstraction of a distributed commit log. Since being created and open-sourced by LinkedIn in 2011, Kafka has quickly evolved from a messaging queue to a full-fledged event streaming platform.

Eventual consistency, according to Wikipedia, is a consistency model used in distributed computing to achieve high availability that informally guarantees that if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. I previously covered the topic of eventual consistency in a distributed system using RabbitMQ in the May 2017 post, Eventual Consistency: Decoupling Microservices with Spring AMQP and RabbitMQ. The post was featured on Pivotal's RabbitMQ website.

To ground the discussion, let's examine a common example: an online storefront. Using a domain-driven design (DDD) approach, we would expect our problem domain, the online storefront, to be composed of multiple bounded contexts. Bounded contexts would likely include Shopping, Customer Service, Marketing, Security, Fulfillment, Accounting, and so forth, as shown in the context map below.

Given this problem domain, we can assume we have the concept of a Customer. Further, we can assume the unique properties that define a Customer are likely to be spread across several bounded contexts. A complete view of the Customer will require you to aggregate data from multiple contexts. For example, the Accounting context may be the system of record for primary customer information, such as the customer's name, contact information, contact preferences, and billing and shipping addresses. Marketing may possess additional information about the customer's use of the store's loyalty program and online shopping activity. Fulfillment may maintain a record of all orders being shipped to the customer. Security likely holds the customer's access credentials, account access history, and privacy settings.

Below, the Customer data objects are shown in yellow. Orange represents the logical divisions of responsibility within each bounded context. These divisions will manifest themselves as individual microservices in our online storefront example.

If we agree that the architecture of our domain's data model requires some duplication of data across bounded contexts, or even between services within the same context, then we must ensure data consistency.
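To make the idea concrete, here is a minimal, self-contained Java sketch of how eventual consistency across bounded contexts might look. It is an illustration only, not the post's actual implementation: the class and event names (`CustomerChanged`, `CustomerView`) are hypothetical, and a plain in-memory list stands in for a Kafka topic. The system of record (Accounting) emits change events; each other context (Marketing, Fulfillment) applies them to its own local copy of the customer data, so all copies converge once no new updates arrive.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class EventualConsistencySketch {

    // Hypothetical change event; in the real system this would be a Kafka message.
    record CustomerChanged(String customerId, String field, String value) {}

    // A bounded context's local view of the customer (e.g. Marketing, Fulfillment).
    static class CustomerView {
        final Map<String, String> attributes = new ConcurrentHashMap<>();

        void apply(CustomerChanged event) {
            attributes.put(event.field(), event.value());
        }
    }

    public static void main(String[] args) {
        CustomerView marketing = new CustomerView();
        CustomerView fulfillment = new CustomerView();
        List<CustomerView> subscribers = List.of(marketing, fulfillment);

        // Accounting, the system of record, appends change events to a log
        // (a stand-in for a Kafka topic).
        List<CustomerChanged> log = new ArrayList<>();
        log.add(new CustomerChanged("c1", "name", "Jane Doe"));
        log.add(new CustomerChanged("c1", "shippingAddress", "123 Main St"));

        // Each subscribing context replays the log independently, at its own pace.
        for (CustomerView view : subscribers) {
            for (CustomerChanged event : log) {
                view.apply(event);
            }
        }

        // Once all events are consumed, every context sees the same values.
        System.out.println(marketing.attributes.equals(fulfillment.attributes));
    }
}
```

The key property this sketch demonstrates is the informal guarantee from the definition above: if no new updates are made, replaying the same ordered log leaves every consumer's copy identical.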