Cache: You Can Have Too Much of a Good Thing…
There’s a lot of discussion in the storage industry about how to implement solid-state (flash) storage. Most of the debate centers on whether it should be implemented as a tier of storage or as a cache. So what’s the difference?
A tier of storage can generally be summed up as a type of storage, defined by price, performance, capacity, availability, or function, that does not require data to be stored elsewhere in order to work properly.
A cache, according to Wikipedia, is “a component that transparently stores data so that future requests for that data can be served faster” and is composed of “duplicates of original values that are stored elsewhere.”
The critical difference between a storage tier and cache is that a storage tier is self-contained and does not require duplicates of data stored elsewhere. The broader implication is that cache decreases capacity utilization and drives up the usable $/GB of the system. Does that mean it’s a bad thing? Absolutely not. But it does mean you need to balance the performance benefits derived from cache with the overall usable $/GB of the system.
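To make that concrete, here’s a back-of-the-envelope sketch. The prices and capacities below are entirely hypothetical; the point is only the structure of the math. Flash used as a cache adds cost to the numerator without adding usable capacity to the denominator:

```python
# Hypothetical prices and capacities, purely for illustration.
disk_capacity_gb = 10_000   # usable capacity lives on disk
disk_cost = 20_000          # $2/GB disk (assumed)
flash_capacity_gb = 1_000   # flash capacity in the system
flash_cost = 15_000         # $15/GB flash (assumed)

# Flash as cache: it holds duplicates of data already on disk,
# so it adds cost but no usable capacity.
cache_usable_cost_per_gb = (disk_cost + flash_cost) / disk_capacity_gb

# Flash as tier: it holds unique data, so it adds usable capacity too.
tier_usable_cost_per_gb = (disk_cost + flash_cost) / (disk_capacity_gb + flash_capacity_gb)

print(f"usable $/GB, flash as cache: ${cache_usable_cost_per_gb:.2f}")  # $3.50
print(f"usable $/GB, flash as tier:  ${tier_usable_cost_per_gb:.2f}")   # $3.18
```

The same hardware spend yields a higher usable $/GB the moment some of it only holds duplicates. Whether the performance gain justifies that premium is the real question.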
If you look at how most storage vendors implement solid-state, they treat it like a read cache. EMC VFCache and NetApp Flash Cache are two good examples of this strategy. What they got right is that both are implemented on PCIe to avoid RAID and storage controller bottlenecks. However, two glaring issues remain:
- Using solid-state exclusively as a cache is extremely expensive, because you still need additional capacity elsewhere to store the “original” data.
- Because writes bypass the solid-state cache, the performance benefits are limited to read-intensive workloads (a topic for my next blog). The sketch below illustrates why.
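Consider a minimal model of a write-around read cache. This is a generic illustration, not the actual behavior of VFCache or Flash Cache, and all names are hypothetical. Writes land on disk at disk speed; only repeat reads get the benefit of flash:

```python
class ReadCache:
    """Toy write-around read cache: writes skip flash entirely."""

    def __init__(self):
        self.flash = {}   # fast duplicate copies
        self.disk = {}    # authoritative copies

    def write(self, block, data):
        # Writes go straight to disk at disk speed; flash is not engaged.
        self.disk[block] = data
        self.flash.pop(block, None)   # invalidate any stale cached copy

    def read(self, block):
        if block in self.flash:       # cache hit: flash speed
            return self.flash[block]
        data = self.disk[block]       # cache miss: disk speed
        self.flash[block] = data      # duplicate into flash for next time
        return data
```

Run a write-heavy workload against this design and the flash sits idle while every write waits on disk.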
I’m not saying solid-state cache is a bad thing; it can work very well for specific problems, such as read-intensive workloads like VDI boot storms. But it’s a very expensive approach and not a complete solution. If you’re paying a lot of money to address VDI boot storms, you had better have a solution for VDI virus scans and patches as well (which are extremely write-intensive), or you’ve just paid a lot of money to address only one of the challenges.
To avoid these issues and unleash the full potential of solid-state, NexGen uses two PCIe solid-state cards from Fusion-io as a transitory mirror. This allows the NexGen n5 Storage System to store all writes in solid-state, protected by the mirror. By “transitory” I mean that once the write has been acknowledged back to the host, the redundant copy is quickly moved from solid-state down to disk so that redundant data doesn’t sit in solid-state, the most expensive resource in the system. After that, NexGen’s Dynamic Data Placement makes real-time decisions about which data stays in solid-state and which data gets evicted. To recap (a sketch of the flow follows the list):
- Solid-state is used as a tier to store writes (protected by a transitory mirror) as they come into the system.
- Solid-state operates as a “read cache” for frequently accessed data.
- Infrequently accessed data is evicted from solid-state and stored exclusively on low cost disk.
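Here’s the sketch promised above: a toy model of the write path with a transitory mirror, plus read caching and eviction. To be clear, this is my illustration of the concept, not NexGen’s actual implementation; the class, names, and LRU eviction policy are hypothetical stand-ins (Dynamic Data Placement is considerably smarter than LRU):

```python
from collections import OrderedDict

class HybridFlashDisk:
    """Toy model: flash tier with a transitory mirror over low-cost disk.
    Hypothetical structure, for illustration only."""

    def __init__(self, flash_capacity_blocks):
        self.flash = OrderedDict()   # block -> data, kept in LRU order
        self.mirror = {}             # second flash copy protecting new writes
        self.disk = {}               # low-cost bulk tier
        self.capacity = flash_capacity_blocks

    def write(self, block, data):
        # 1. Land the write in flash, with a second copy on the mirror,
        #    so it is protected before the host is acknowledged.
        self.flash[block] = data
        self.flash.move_to_end(block)
        self.mirror[block] = data
        # 2. (Host acknowledged at this point in a real system.)
        # 3. "Transitory": destage the redundant copy down to disk so only
        #    one copy occupies flash, the most expensive resource. In a
        #    real array this destage would happen asynchronously.
        self.disk[block] = self.mirror.pop(block)
        self._evict_if_needed()

    def read(self, block):
        if block in self.flash:            # hot data: served from flash
            self.flash.move_to_end(block)
            return self.flash[block]
        data = self.disk[block]            # cold data: served from disk
        self.flash[block] = data           # promote into the read cache
        self._evict_if_needed()
        return data

    def _evict_if_needed(self):
        # Stand-in for real-time placement decisions: evict the least
        # recently used block. A copy already exists on disk, so the
        # flash copy can simply be dropped.
        while len(self.flash) > self.capacity:
            block, data = self.flash.popitem(last=False)
            self.disk.setdefault(block, data)
```

Note that eviction needs no write-back: the transitory destage already put a copy on disk, which is exactly what makes evicting cold data cheap.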
We do pay the capacity-utilization penalty with our read cache just like everyone else, but the key is that the read cache is only one aspect of our overall implementation of solid-state. We strive to keep the system balanced: performance against capacity, reads against writes, caching against tiering. This balanced approach results in a much more efficient storage system that delivers lower $/IOP and $/GB in a smaller footprint.
Solid-state cache isn’t a bad thing, but just like most things in life, moderation is the best approach.