SSDs are like cordless phones and DVDs. They improved an existing technology but didn’t revolutionize its use. In technology there is a difference between modernizing and revolutionizing. Modernizing is finding a way to do the same thing a little bit faster or a little bit easier. Revolutionizing is either eliminating or vastly changing how something is done.
Cordless phones allowed a person to walk anywhere in their house, and DVDs held more data than CDs (4.7GB vs 700MB), but neither really changed the usage pattern of the technology. It was the cellular phone that turned a phone number into a person rather than a place and allowed a user to walk anywhere in the world. It was USB sticks that delivered a rewritable file transfer medium. Technology modifications that just make something a little bigger or faster are a refresh, not a revolution.
On to modern day storage technology.
SSDs are specifically designed to be a disk drive refresh. We took the 2.5” form factor disk drives, removed the spinning platters and replaced them with flash chips. They have the same physical size and the same physical connectors to make it easy to plug right into existing storage solutions. Pop out the HDDs, pop in the SSDs and voilà, it’s faster. But we still have the same basic architecture (aggregation and segregation), the circus to manage it and the issues it brings:
- How many different types and owners of workloads
- How many different LUN groups are needed for all the workloads
- How many units should be in each LUN group
- What RAID to put on each LUN group
- What speed units to put in each LUN group (15k, 10k, SATA, SSD)
- How to keep one workload from affecting another
- What happens when the IO profile changes (OLTP to backups, ad hoc database queries, etc.)
- How do we scale (how many additional units to achieve new IOPS or latency goals)
- Buying space to get speed
- Segregating units (isolates workloads but keeps the system from deploying its full speed at any one time and abandons space for cost inefficiencies)
- Legacy chassis controllers’ speed bottlenecks
- Legacy engines’ IOPS bottlenecks
- Introduced a Write Cliff issue (another topic for another day)
- How to deal with hot spots and data locality as workloads shift
- Significant drop in performance during a unit failure / RAID rebuild
- Software to manage tiers
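The scaling and “buying space to get speed” questions above come down to back-of-envelope spindle math. Here is a minimal sketch of that classic calculation; all of the numbers (IOPS target, read percentage, per-unit IOPS, RAID write penalty) are hypothetical illustrations, not figures from any particular vendor:

```python
# Hypothetical back-of-envelope sizing: how many units does a LUN group
# need to hit an IOPS target under a given RAID write penalty?
import math

def units_needed(target_iops, read_pct, iops_per_unit, raid_write_penalty):
    """Effective backend IOPS demand = reads + writes * RAID penalty."""
    reads = target_iops * read_pct
    writes = target_iops * (1 - read_pct)
    effective = reads + writes * raid_write_penalty
    return math.ceil(effective / iops_per_unit)

# Example: 10,000 IOPS workload, 70% reads, ~180 IOPS per 15k HDD,
# RAID-5 write penalty of 4 -> you buy spindles for speed, not space.
print(units_needed(10_000, 0.70, 180, 4))  # 106 units
```

Note that capacity never appears in the formula: to reach the IOPS goal you end up buying far more raw space than the workload needs, which is exactly the cost inefficiency the list above describes.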
SSDs are faster than hard drives, but deploying them in the same way does not vastly change how storage is managed. SSDs are a modernization, not a revolution, in storage architecture. It’s the same basic system, just with faster units.
So what then would be a revolution? A distributed block, all-silicon storage architecture that allowed for:
- Random Access Storage (RAS). A memory-like architecture where every storage address is equally accessible, at the same speed, all the time. Any workload using any data will perform the same at any time. When sequential and random access become the same, any number of workloads can be active at the same time, allowing scale (parallelization) without performance degradation.
- Distributed block architecture. With every IO hitting every component every time, the parallelization of flash is at its maximum, delivering the best possible speed to every IO, every time. Segregating units into LUNs decreases parallelization instead of maximizing it.
- Write cliff avoidance. Not all flash degrades in performance. Violin’s NAND flash is intelligently managed at the card, controller and array levels to eliminate the core issue with sustained flash write performance.
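The parallelization point above is simple arithmetic. This sketch contrasts the two layouts with made-up numbers (unit count and per-unit IOPS are assumptions for illustration only):

```python
# Hypothetical illustration: segregating units into LUN groups caps each
# workload at its group's speed, while a distributed block layout lets
# every IO draw on every unit in the array.
UNITS = 24            # total units in the array (assumed)
IOPS_PER_UNIT = 5000  # per-unit throughput (assumed flash figure)

# Segregated: 24 units split into 4 LUN groups of 6 units each.
# Any one workload can never exceed its group's 6 units.
per_workload_segregated = 6 * IOPS_PER_UNIT

# Distributed: all 24 units serve every IO, shared by all workloads.
whole_array = UNITS * IOPS_PER_UNIT

print(per_workload_segregated)  # 30000  -- ceiling for any one workload
print(whole_array)              # 120000 -- available to any workload
```

Segregation isolates workloads, but it also guarantees that no single workload can ever use the full speed of the hardware you paid for.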
Why is this a revolution? With a distributed block architecture, where all data is equally spread over all components all the time, and random access storage (RAS), where every storage address is equally accessible at all times, an administrator can get the maximum performance out of the storage tier for every workload, every time, with very little effort. Only the invention of a new technology can allow something to quickly become cheaper, faster and easier to use. All-flash arrays are that new technology deployed in its proper form. No software, no gimmicks, no pain.
A storage purchase will usually have a production life of 3 to 5 years. What do you want your data residing on 3 years from now? Something modern or something revolutionary?