Recreating Problems of the Past
All-solid-state arrays are chipping away at the bottlenecks exposed in traditional storage architectures, but they’re also creating new problems. In reality, these are problems that have been solved previously but are resurfacing with this new technology. For example:
1. Cost. Solid-state is simply more expensive than disk ($1.11/GB for disk vs. $12.36/GB for solid-state, per hp.com pricing on 6/7/12). Waiting for solid-state costs to come down to disk levels isn't an option for people buying storage today. The argument that compression and dedupe can pull $/GB down to disk levels is weak because those technologies are not exclusive to all-solid-state arrays; they can be applied to any type of storage array. Any $/GB reduction an all-solid-state array achieves will be mirrored in arrays that start from a much lower $/GB (a back-of-the-envelope sketch of this math follows the list below).
Another often-overlooked issue is entry price. To provide an adequate level of capacity, most all-solid-state arrays carry entry prices well above $200,000, which puts them out of reach for the bulk of customers (86 percent of them, according to IDC). Other vendors cut corners and offer lower-priced all-solid-state systems by omitting hardware redundancy. Those systems are not enterprise class and have poor capacity utilization, making scalability an issue.
2. Complexity. Remember Sepaton ('no tapes' spelled backwards)? Tape is dead, right? End users may not want to talk about tape, but it is still roughly a $2B industry. It survives because there are classes of data for which tape makes the most business sense, and solid-state doesn't change that. The same realism applies to disk: it will be around for a very, very long time. So a new all-solid-state array won't replace all of your disk-based systems; it becomes an additional management silo, which means more complexity and more time.
3. Migration. Not all data needs solid-state performance. VMware uses a rule of thumb that 90 percent of the storage I/O in most virtual environments is generated by 10 percent of the data, and IDC data backs this up: it predicts that capacity for I/O-intensive data will make up less than 7 percent of total capacity by 2014. If solid-state is more expensive than disk and most data doesn't require solid-state performance, why would you want to store that data there? (The second sketch after this list puts illustrative numbers on the trade-off.) That creates the third key problem for all-solid-state arrays: migration. How will end users migrate the cold data off an all-solid-state array? Backup software? Storage virtualization? Both require a step back in time, more management complexity, and lots of money.
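To make the $/GB argument from point 1 concrete, here is a quick back-of-the-envelope calculation. The prices are the hp.com figures quoted above; the 5:1 data-reduction ratio is an illustrative assumption, not a measured value.

```python
# Effective $/GB under data reduction (compression + dedupe).
# Prices are the hp.com figures quoted above (6/7/12); the 5:1
# reduction ratio is an assumed illustration, not a benchmark.
DISK_PER_GB = 1.11
FLASH_PER_GB = 12.36
REDUCTION = 5.0  # assumed combined compression/dedupe ratio

flash_effective = FLASH_PER_GB / REDUCTION  # ~$2.47/GB
disk_effective = DISK_PER_GB / REDUCTION    # ~$0.22/GB

print(f"Flash at {REDUCTION:.0f}:1 reduction: ${flash_effective:.2f}/GB")
print(f"Disk at the same reduction:   ${disk_effective:.2f}/GB")
# The reduction applies to both media, so the ~11x price gap persists.
```

Because the same reduction applies to disk, the relative gap never closes; flash's effective $/GB still lands well above raw disk.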
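The same arithmetic drives the migration problem in point 3. The sketch below applies VMware's 90/10 rule of thumb to a hypothetical environment; the 100 TB total is an assumed figure chosen purely for illustration.

```python
# Applying the 90/10 rule of thumb to a hypothetical 100 TB environment.
# TOTAL_TB is an assumed figure; $/GB prices are the hp.com quotes above.
TOTAL_TB = 100.0
HOT_FRACTION = 0.10  # ~10% of data drives ~90% of the I/O

hot_tb = TOTAL_TB * HOT_FRACTION  # belongs on solid-state
cold_tb = TOTAL_TB - hot_tb       # performs fine on disk

all_flash_cost = TOTAL_TB * 1000 * 12.36
tiered_cost = (hot_tb * 12.36 + cold_tb * 1.11) * 1000
print(f"All-solid-state: ${all_flash_cost:,.0f}")
print(f"Flash + disk:    ${tiered_cost:,.0f}")
# Forcing the cold 90% onto flash multiplies the bill roughly 5-6x here.
```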
All-solid-state arrays work great for the I/O-intensive workloads that make up that sub-7-percent slice of capacity. But because of the three problems they recreate, all-solid-state arrays aren't practical solutions for multi-purpose storage environments like those used to support VMware and/or Hyper-V. Which brings up my final point.
Solid-state is not a solution, or more specifically, it's not a silver bullet. It's an enabling component of a broader solution. But a solution for what? Our customers have virtualized to reduce cost, and in doing so have exposed a fundamental flaw in shared storage: every workload impacts every other workload, which makes performance extremely unpredictable. More raw performance helps, but it doesn't address the root cause, which is that the performance of a shared storage system is itself shared. Only new software features like storage performance QoS and service levels can address the real problem of managing performance (a minimal sketch of the idea follows below). Solid-state by itself doesn't accomplish this; it's only part of a broader solution.
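To show what "storage performance QoS" means in practice, here is a minimal sketch of per-volume I/O admission using a token bucket. This is a conceptual toy under stated assumptions, not any vendor's implementation; the class name, rates, and burst sizes are all hypothetical.

```python
import time

# Minimal sketch of per-volume storage QoS using a token bucket.
# Conceptual only: IopsBucket and all numbers below are assumptions,
# not any vendor's actual implementation.
class IopsBucket:
    def __init__(self, max_iops: float, burst: float):
        self.rate = max_iops       # tokens (I/Os) refilled per second
        self.capacity = burst      # headroom for short bursts
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Admit one I/O if this volume is within its service level."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over budget: queue/delay, so neighbors are unaffected

# Each volume gets its own bucket, so one hot workload can't starve another.
vol_a = IopsBucket(max_iops=1000, burst=2000)  # performance-sensitive volume
vol_b = IopsBucket(max_iops=500, burst=500)    # capacity-oriented volume
```

Because each volume refills at its own rate, a noisy neighbor exhausts only its own tokens while other volumes keep their guaranteed IOPS. That separation, not raw speed, is what makes shared-storage performance predictable.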