SSD - The Big Technical Illusion
Enterprise Solid State Storage is the hottest technology to hit the storage market since Winchester disk drives made their way into the IBM PC-XT, way back in 1983. And, as with any revolutionary technology, there is a lot of hype: promises to solve all of your storage I/O bottlenecks, along with a significant reduction in power, cooling and datacenter footprint, all by eliminating your spinning “rust”.
Here’s the truth of the matter. If your SSD implementation involves unplugging a legacy disk drive and replacing it with a Solid State Drive (SSD), you’ll likely end up spending a significant amount of money without realizing the promised benefits of SSD. Deploying SSDs behind what has historically been a disk drive RAID controller is almost guaranteed to be a disappointment.
A quick analysis of legacy disk drive based SAN and NAS systems reveals some architectural issues. A typical controller in a midrange storage system is designed to deliver I/O for as many as 1,000 disk drives. Its processor, firmware, RAID and other algorithms are all tuned for I/O to disk drives. Herein lies the problem.
First, let’s examine IOPS. If a 15k RPM drive can deliver 200 IOPS (a generous assumption), then with 1,000 drives simple math says these controllers should deliver up to 200,000 IOPS. Realistically, most of these controllers can’t come close to that because of other bottlenecks and overheads, but for comparison’s sake we’ll use this best-case scenario. Now, if you look at the specification sheet of an SSD, a typical drive can deliver 50,000 IOPS. Thus, from an IOPS perspective, it takes only 4 SSDs to saturate a legacy 1,000-disk controller. Essentially, placing an SSD behind a SAS link is akin to placing RAM behind a SAS link — it makes no sense. Most of the storage vendors that deploy SSDs behind a controller will publish benchmarks of 8 drives or fewer so they don’t expose the controller bottleneck.
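The IOPS saturation math above can be sketched as a quick back-of-the-envelope calculation. The figures are the generous assumptions from this post (200 IOPS per 15k drive, 50,000 IOPS per SSD), not measurements:

```python
# Back-of-the-envelope IOPS saturation check.
# All figures are the assumptions stated in the text, not measured values.
HDD_IOPS = 200           # generous estimate for a single 15k RPM disk drive
MAX_DRIVES = 1000        # drive count a midrange controller is designed for
SSD_IOPS = 50_000        # typical SSD spec-sheet figure

# Best-case controller ceiling: every disk slot delivering full IOPS.
controller_ceiling = HDD_IOPS * MAX_DRIVES

# How many SSDs it takes to consume that entire ceiling.
ssds_to_saturate = controller_ceiling // SSD_IOPS

print(controller_ceiling)   # 200000
print(ssds_to_saturate)     # 4
```

In other words, just 4 SSDs deliver as many IOPS as the controller was ever designed to move, even under the rosiest assumptions about the controller.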
What about throughput? Placing SSDs behind a legacy drive controller tells a similar story. A typical SSD is capable of about 4 Gbps of throughput. A legacy RAID controller in a SAN or NAS system sits on an x8 PCIe bus, which is capable of about 32 Gbps of throughput. It is likely that the processor and firmware on the RAID controller can’t come close to 32 Gbps, but even if they could, the controller could keep at most 8 SSDs busy before saturating the bus.
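The throughput argument works the same way. A minimal sketch, using the idealized figures from the text (4 Gbps per SSD, ~32 Gbps for an x8 PCIe bus):

```python
# Throughput saturation check for SSDs behind a legacy RAID controller.
# Idealized figures from the text; real controllers deliver less.
PCIE_X8_GBPS = 32    # approximate x8 PCIe bus throughput, best case
SSD_GBPS = 4         # typical single-SSD throughput

# Number of SSDs that fully saturate the controller's PCIe bus.
ssds_to_saturate_bus = PCIE_X8_GBPS // SSD_GBPS

print(ssds_to_saturate_bus)   # 8
```

So even granting the controller its full theoretical bus bandwidth, a ninth SSD adds nothing: the bottleneck has simply moved from the disks to the controller.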
Another questionable technology for SSDs is RAID. The original definition of RAID is “Redundant Array of Inexpensive Disks.” Do you think SSDs are inexpensive? Somehow, I doubt it.
NexGen believes Solid State Storage belongs directly on the PCIe bus, and has architected its SAN from the ground up for PCIe Solid State. Where traditional storage vendors put their entire disk engine behind one or two PCIe slots, NexGen dedicates an entire Gen 2 PCIe slot to a single Solid State device. This allows NexGen to deliver more IOPS than legacy controller-based SANs while requiring less Solid State capacity. Less Solid State means lower cost, and a significant reduction in power, cooling and datacenter footprint. NexGen can scale Solid State up and out using PCIe as the backplane, which alleviates the SSD bottlenecks associated with legacy storage solutions. No more RAID, SAS, SATA, FC, etc.
Clearly, the data path of disk-based architectures was not designed to handle the performance capabilities of SSDs. They yield an incremental performance improvement, but at what cost? SSDs are the most expensive type of storage, so if they can’t achieve their performance potential, why waste the money?
By integrating solid state next to the CPU on the PCIe bus, NexGen allows solid state to run at the speeds for which it was designed.
This is the first half of a two-part blog series focused on PCIe Solid State Storage. Stay tuned…