Distributed caches are an essential layer between the application and persistent storage in the current generation of data-intensive applications. No matter how well the underlying databases and file systems scale, they will still be slower than what the application needs, especially for high-visibility applications where responsiveness is taken for granted.
Therefore, these applications typically persist data into an “eventually consistent” data layer and, on top of that, add a distributed cache layer that holds the information “immediately required” by the application to keep it responsive. You might even run multiple distributed cache clusters, each focused on a specific dataset and the operations around it.
Let’s look at a simplified view of how a distributed cache works. The cache distributes data across the nodes of the cluster. Each data object is replicated: the master copy lives on one node, with one or two backup copies on other nodes. Replication between master and backups can also happen asynchronously, but a distributed cache typically applies synchronous replication for at least one backup.
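As an illustration, here is a minimal configuration sketch using Hazelcast (the map name “orders” is a hypothetical example): the backup count controls synchronous backups that are written before the operation completes, while the async backup count adds copies that propagate in the background.

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class BackupConfigDemo {
    public static void main(String[] args) {
        Config config = new Config();
        // One synchronous backup (the put() blocks until the backup is
        // written) plus one asynchronous backup replicated in the background.
        config.getMapConfig("orders")
              .setBackupCount(1)
              .setAsyncBackupCount(1);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.getMap("orders").put("order-1", "pending");
        hz.shutdown();
    }
}
```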
Distributed caches can hold different types of information, including:
Use case 1 mentioned above is the typical pattern across the majority of installations.
When a single process updates the dataset, as in use case 1, there is no need for write synchronization. It’s straightforward and simple, similar in concept to a single writer thread with multiple reader threads. The only point to handle is that while a write is in progress, reads should either block or be served from the snapshot taken before the write began. Distributed cache implementations typically handle this write/read sequencing transparently.
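As a minimal sketch of this single-writer pattern, assuming Hazelcast (the “fx-rates” map and its contents are hypothetical): per-key operations on an IMap are atomic, so a concurrent reader sees either the value from before the write or the one after it, never a partial state.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class ReferenceDataDemo {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // One loader process refreshes the reference data...
        IMap<String, String> fxRates = hz.getMap("fx-rates");
        fxRates.put("EUR/USD", "1.09");
        fxRates.put("GBP/USD", "1.27");

        // ...while readers (usually other JVMs or clients) simply call get().
        // Each put() is atomic per key, so a concurrent get() returns the
        // old value or the new one, never a torn write.
        System.out.println("EUR/USD = " + fxRates.get("EUR/USD"));

        hz.shutdown();
    }
}
```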
But for use cases 2 and 3, the question that arises is: “who performs this atomic update?”
Possible update scenarios for use cases 2 and 3 are:
I strongly recommend reading the documentation to better understand the features and configuration options available. A distributed cache that is not tuned correctly for its use case can introduce latency into the application.
For example, when using the Hazelcast EntryProcessor, it’s best to use the OBJECT in-memory format instead of BINARY. Otherwise, every EntryProcessor update incurs SerDe overhead: the stored bytes must be deserialized into an object for the update and then serialized back for storage. Another optimization is to have the EntryProcessor return no result when the client does not need one, so that no response payload is serialized and shipped back. Both are shown in the sketch below.
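Here is a sketch of both optimizations, assuming the Hazelcast 4.x/5.x Java API; the map name “counters”, the Counter class, and IncrementProcessor are hypothetical names used for illustration.

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.EntryProcessor;
import com.hazelcast.map.IMap;

import java.io.Serializable;
import java.util.Map;

public class EntryProcessorDemo {

    // Hypothetical value type; must be serializable to move between members.
    public static class Counter implements Serializable {
        public long hits;
    }

    // Runs on the member that owns the key, so the read-modify-write
    // is atomic without any explicit locking by the caller.
    public static class IncrementProcessor
            implements EntryProcessor<String, Counter, Void> {
        @Override
        public Void process(Map.Entry<String, Counter> entry) {
            Counter c = entry.getValue();
            if (c == null) {
                c = new Counter();
            }
            c.hits++;
            entry.setValue(c);  // persists the mutation in the map
            return null;        // no result: nothing is serialized back to the caller
        }
    }

    public static void main(String[] args) {
        Config config = new Config();
        // Store values as live objects: the EntryProcessor then skips the
        // deserialize-update-reserialize round trip on every invocation.
        config.getMapConfig("counters")
              .setInMemoryFormat(InMemoryFormat.OBJECT);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        IMap<String, Counter> counters = hz.getMap("counters");

        counters.executeOnKey("page-42", new IncrementProcessor());
        System.out.println("hits = " + counters.get("page-42").hits);

        hz.shutdown();
    }
}
```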
I hope you have found this article insightful. Now, get ready to deep dive into a code demo that showcases what we’ve discussed here. That’s Part 2 of this article, so stay tuned!