Improving Performance with Solid-State is Good, but…
As solid-state storage becomes more common in shared storage solutions, it's becoming clear that it has more to offer than just unbridled performance. Don't get me wrong, that's all good, but it's time for some innovation. So far, most approaches to solid-state have been more about time to market than about solving specific problems. The two common implementation strategies below exemplify this.
1. Solid-state as a tier: “Franken-storage”
First to market was SSD as a bolt-on to traditional disk-based storage architectures. While this got product to market quickly, the approach has real shortcomings. The legacy storage controllers in these systems were designed for spinning disks, not SSD; they'll choke on just a few SSDs' worth of bandwidth. And although vendors will never admit the architecture is less than ideal, actions speak louder than words (EMC Buys Flash-Array Startup XtremIO). Performance is further hurt by the reactive tiering algorithms that move data between SSD and disk: a workload that needs solid-state performance right now won't see it until the tiering engine catches up, well after the need has passed.
2. All solid-state arrays: “Performance for EVERY workload”
The second approach was born from the failures of the first. Since the SSD bolt-on boxes were too slow and inefficient, the next batch of vendors introduced all-SSD arrays to the world. To be fair, if money were no object, I think most admins would go all-solid-state. But that is not today's reality. SSD is still the most expensive single component in a storage system. If ALL your data required solid-state performance ALL the time, it might make sense to shell out some serious dough for an all-solid-state array. But for those on a budget who need a multi-purpose shared storage system with high performance where it's needed, this is probably not the best fit. Oh yeah, and I forgot to mention that the all-SSD arrays use legacy storage controllers too, which ensures the SSDs will never reach their advertised speeds.
In contrast, the NexGen Storage approach is focused on “Performance where and when it’s needed”.
We believe that PCIe will become the standard backplane for future generations of storage architectures, and we have developed a PCIe-based solid-state storage system built from Fusion-io solid-state and nearline SAS. With solid-state on the PCIe bus, we're able to leverage all 48 x86 processor cores for massively parallel processing. The data path gets the full bandwidth of the entire PCIe bus, rather than being limited by a controller card in a single slot.
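To make the parallelism point concrete, here's a minimal sketch (my illustration, not NexGen's data path code) of fanning read I/O out across every core. The device path and sizes are placeholders; the point is simply that with enough parallel streams, the bottleneck becomes the device and the bus rather than any single controller.

```python
import os
from concurrent.futures import ProcessPoolExecutor

# Placeholder path -- point this at any large file or block device you
# have read access to; the name here is an assumption, not a requirement.
DEVICE = "/dev/nvme0n1"
CHUNK = 1024 * 1024  # 1 MiB per read call

def read_span(args):
    """Sequentially read `length` bytes starting at `offset`."""
    offset, length = args
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        done = 0
        while done < length:
            buf = os.pread(fd, min(CHUNK, length - done), offset + done)
            if not buf:        # hit end of file/device
                break
            done += len(buf)
        return done
    finally:
        os.close(fd)

if __name__ == "__main__":
    workers = os.cpu_count() or 4
    span = 256 * 1024 * 1024   # each worker reads its own 256 MiB region
    tasks = [(i * span, span) for i in range(workers)]
    # One process per core: aggregate throughput is bounded by the device
    # and the PCIe bus, not by a single controller thread in a single slot.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(read_span, tasks))
    print(f"read {total / 2**20:.0f} MiB across {workers} cores")
```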
But regardless of how you get there, delivering solid-state performance is just the first step. From there, the focus quickly turns to helping customers reduce costs and improve efficiency. For example, how does a solution handle the issues that plague most shared storage, such as resource contention?
At NexGen we're addressing storage performance management with Storage Quality of Service (QoS), which provides visibility into performance resources, allows them to be provisioned just like capacity, and then ensures that those performance levels are maintained.
It begins with visibility into the available storage performance resources. The NexGen management dashboard presents real-time performance and capacity information so you can make informed decisions about how to provision performance. You heard me correctly: provision performance! NexGen has solved a problem as old as shared storage itself by providing the ability to provision performance just like capacity.
When a NexGen volume is provisioned, both capacity and performance are defined in the same dialog box. Each volume is assigned one of three QoS levels (Mission-Critical, Business-Critical, and Non-Critical) that defines its IOPS, throughput, and latency targets. A volume's QoS setting also determines its solid-state-to-SAS ratio. Once the volume is created, QoS becomes responsible for maintaining its performance levels.
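NexGen doesn't expose this as a public API, but to make the idea concrete, here's a minimal sketch of provisioning capacity and performance in one step. The class names and every number (IOPS floors, latency targets, solid-state ratios) are hypothetical placeholders of mine, not NexGen's actual values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QosLevel:
    """One of the three service tiers a volume can be assigned to."""
    name: str
    min_iops: int            # guaranteed IOPS floor (illustrative)
    min_throughput_mbps: int # guaranteed throughput floor (illustrative)
    max_latency_ms: float    # latency target (illustrative)
    ssd_ratio: float         # fraction of hot data kept on solid-state

# Made-up placeholder numbers, not NexGen's published specs.
MISSION_CRITICAL = QosLevel("Mission-Critical", 50_000, 500, 1.0, 0.9)
BUSINESS_CRITICAL = QosLevel("Business-Critical", 20_000, 200, 5.0, 0.5)
NON_CRITICAL = QosLevel("Non-Critical", 5_000, 50, 20.0, 0.1)

@dataclass
class Volume:
    """Capacity and performance are provisioned together, in one step."""
    name: str
    capacity_gb: int
    qos: QosLevel

exchange = Volume("exchange-db", capacity_gb=2_000, qos=MISSION_CRITICAL)
file_share = Volume("file-share", capacity_gb=10_000, qos=NON_CRITICAL)
```

The design point the sketch tries to capture is that performance is a first-class attribute of the volume, defined at creation time right alongside capacity.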
QoS is not a constraining force. When the seas are calm, performance resources are available to any volume, including lower-priority workloads like file shares (Non-Critical). But if Exchange (Mission-Critical) gets hungry for performance, QoS guarantees the levels you defined, at the expense of the lesser QoS tiers.
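Here's a toy sketch of how that arbitration might work, again my own illustration rather than NexGen's actual algorithm: guarantees are honored in priority order first, and any leftover headroom is shared with whoever still wants it.

```python
# Toy volume table: (name, priority, guaranteed_iops), best priority = 0.
# The names and numbers are hypothetical.
VOLUMES = [
    ("exchange-db", 0, 50_000),   # Mission-Critical
    ("sql-reports", 1, 20_000),   # Business-Critical
    ("file-share",  2, 5_000),    # Non-Critical
]

def allocate_iops(demand: dict[str, int], total_iops: int) -> dict[str, int]:
    """Grant guaranteed IOPS in priority order, then share any leftover.

    In calm seas (total demand below total_iops) every volume gets what
    it asks for; under contention, the lower tiers absorb the shortfall."""
    allocation = {name: 0 for name, _, _ in VOLUMES}
    remaining = total_iops
    # Pass 1: honor guarantees from highest to lowest priority.
    for name, _, guaranteed in sorted(VOLUMES, key=lambda v: v[1]):
        granted = min(demand.get(name, 0), guaranteed, remaining)
        allocation[name] = granted
        remaining -= granted
    # Pass 2: distribute leftover headroom to any still-unmet demand.
    for name, _, _ in sorted(VOLUMES, key=lambda v: v[1]):
        extra = min(demand.get(name, 0) - allocation[name], remaining)
        if extra > 0:
            allocation[name] += extra
            remaining -= extra
    return allocation

# Calm seas: the file share asks for double its guarantee and gets it all.
print(allocate_iops({"file-share": 10_000}, total_iops=60_000))
# Contention: Exchange's guarantee is met at the file share's expense.
print(allocate_iops({"exchange-db": 50_000, "file-share": 30_000}, 60_000))
```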
QoS is just one facet of the NexGen solution, and it underpins the entire product. For instance, it ensures that Dynamic Data Placement (real-time tiering) and Phased Data Reduction (real-time dedupe) don't impact application performance.
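As a rough sketch of how that could work (an assumption of mine, not NexGen's implementation), picture the dedupe engine pacing itself against whatever IOPS headroom QoS leaves free; `dedupe_one` and `current_foreground_iops` are hypothetical hooks:

```python
import time

def background_budget(total_iops: int, foreground_iops: int,
                      reserve_fraction: float = 0.1) -> int:
    """IOPS a background service may consume right now.

    Whatever the provisioned workloads aren't using is headroom, minus a
    reserve held back so a sudden foreground burst never has to queue
    behind tiering or dedupe work."""
    headroom = total_iops - foreground_iops
    return max(0, int(headroom - total_iops * reserve_fraction))

def dedupe_pass(blocks, dedupe_one, total_iops, current_foreground_iops):
    """Walk candidate blocks, pausing whenever QoS leaves no headroom.

    `dedupe_one(block)` dedupes a single block; `current_foreground_iops()`
    samples live foreground demand. Both are placeholders for illustration."""
    for block in blocks:
        while background_budget(total_iops, current_foreground_iops()) == 0:
            time.sleep(0.01)   # foreground is busy; back off and retry
        dedupe_one(block)
```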
So not only can we deliver solid-state performance at a significantly lower $/GB than 15K disk drives, we can also ensure that the applications that need performance get it and that low-priority workloads don't interfere. That means maximum consolidation, and it helps customers avoid creating solid-state silos and the migration challenges that follow.