This article walks through a working code demo that showcases the concepts discussed in Part 1.
In Part 1, I discussed two approaches for use cases 2 and 3. The code shared in this article implements both approaches, so you can better understand their pros and cons.
For Approach 1, I’ve deliberately not applied a distributed lock, so that I can demonstrate the data integrity issues that can arise when individual clients update a shared object concurrently.
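To make the failure mode concrete, here’s a minimal, deterministic sketch of the classic lost update. The class name, the key `timeline:42:size`, and the plain `HashMap` standing in for the shared cache are all illustrative assumptions, not code from the demo: two clients each do an unsynchronized read-modify-write on the same entry, and one append silently disappears.

```java
import java.util.HashMap;
import java.util.Map;

public class LostUpdateDemo {
    // Simulates two clients doing an unsynchronized read-modify-write
    // on a shared cache entry (here, a user's timeline size).
    static int run() {
        Map<String, Integer> cache = new HashMap<>(); // stand-in for the shared cache
        cache.put("timeline:42:size", 10);

        int readByA = cache.get("timeline:42:size"); // client A reads 10
        int readByB = cache.get("timeline:42:size"); // client B also reads 10

        cache.put("timeline:42:size", readByA + 1); // A writes 11
        cache.put("timeline:42:size", readByB + 1); // B also writes 11, clobbering A's update

        return cache.get("timeline:42:size");
    }

    public static void main(String[] args) {
        // Two appends occurred, but the counter only advanced by one.
        System.out.println(LostUpdateDemo.run()); // prints 11, not 12
    }
}
```

In a real cluster the interleaving is nondeterministic, which is exactly what makes the bug hard to reproduce; the sketch forces the bad interleaving so you can see it every time.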
Please note that the code is a very rudimentary timeline management service and omits many real-world use cases. It’s meant to showcase how a distributed cache enables atomic operations across a cluster in a very efficient manner.
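For contrast with the lost-update sketch above, here is a hedged local-JVM analogy of what a cache-side atomic operation buys you. A `ConcurrentHashMap` stands in for the distributed cache (the class name and key are illustrative); real distributed caches achieve the same effect by executing the read-modify-write on the node that owns the key, e.g. Redis `INCR` or Hazelcast entry processors.

```java
import java.util.concurrent.ConcurrentHashMap;

public class AtomicUpdateDemo {
    static int run() {
        // Stand-in for a distributed cache; compute() runs the update atomically per key.
        ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
        cache.put("timeline:42:size", 10);

        // Each append is submitted as a single read-modify-write that the
        // cache executes atomically, so neither update can be lost.
        cache.compute("timeline:42:size", (k, v) -> v + 1);
        cache.compute("timeline:42:size", (k, v) -> v + 1);

        return cache.get("timeline:42:size");
    }

    public static void main(String[] args) {
        // Both appends are preserved: the counter advances by two.
        System.out.println(AtomicUpdateDemo.run()); // prints 12
    }
}
```

The key design point is that the update function travels to the data, rather than the data traveling to each client, which removes the read-then-write race entirely.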
The code example uses Java data objects to keep things simple, but in a system targeting high throughput, data would very likely be passed along the pipeline in string form (e.g., as a JSON string) to eliminate object SerDe overhead.
Read the full blog here: