How To Beat The Cost of Disk?

Posted by Tim Stoakes on Jan 27, 2015 8:48:12 PM

In this episode of the Architecture Matters blog series, let's look at the economics of all-flash arrays and Violin’s strategy to make all-flash arrays as affordable as enterprise disk solutions.

At first sight, this looks like a challenge worth taking on: a quick online price comparison between a disk and an SSD of the same capacity shows anywhere from a 3X to 10X cost difference between the two. Even though that comparison is not against enterprise-grade disk, the question remains: how do you beat the cost of disk?

At Violin, we think that’s an excellent question to ask, and as with any difficult challenge, the answer is not just about focusing on one specific aspect of the architecture.

It is imperative to beat the cost of disk on a pure acquisition-cost basis (hardware + software + support + services), but it is also important to consider the additional savings a Violin all-flash array delivers in operational and management costs.

For instance, Violin customers tell us consistently that we helped them realize significant acceleration of their apps and workloads and delighted their application admins and users. But they also share with us that the Violin deployment revealed how their server and application environment was often over-provisioned due to the inherent latency and IOPS performance issues of their existing disk solutions. Consequently, many Violin customers end up taking out a significant number of their application servers, freeing up rack-space and reducing software license, power and cooling costs.

We call this the “Violin Effect” and this is a direct result of our high-performance architecture. As I mentioned in my last blog, this is one of the reasons why you can’t have “too much performance” in an all-flash array.

These savings are an excellent start, but Violin does not stop there. From an architecture perspective, our cost-reduction strategy boils down to three key goals:

  1. Continue to drive down to the lowest possible cost of NAND
  2. Achieve the lowest possible cost of the system (everything else other than the NAND)
  3. Employ inline de-duplication and compression to reduce the actual data that is written

1. First and foremost, achieve the lowest possible cost of the actual physical NAND (the underlying technology of any all-flash array).

Many years ago, we decided to start building our own flash modules after realizing that SSDs would not give us the lowest possible cost.

The cost of an SSD is unnecessarily inflated by two factors that we eliminate by building our own VIMMs (Violin Intelligent Memory Modules).

First, all SSDs are physically over-provisioned, sometimes with up to double the capacity the SSD is sold at (i.e. an 800GB SSD can contain up to 1.6TB worth of NAND). SSD vendors do this to guarantee the warranty under any type of use case, and to support garbage collection within the SSD. Ultimately, though, this can double the NAND cost to the end user.
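The over-provisioning arithmetic above can be sketched in a few lines of Python. The per-GB dollar figure and the 20% over-provisioning level for the comparison are hypothetical, chosen only to illustrate how the math works:

```python
def effective_nand_cost(sold_gb, raw_gb, cost_per_raw_gb):
    """Cost per sellable GB when a device ships more raw NAND than it sells."""
    total_nand_cost = raw_gb * cost_per_raw_gb
    return total_nand_cost / sold_gb

# An 800 GB SSD built with 1.6 TB of raw NAND (2x over-provisioned),
# at a hypothetical $0.30 per raw GB of NAND:
ssd = effective_nand_cost(sold_gb=800, raw_gb=1600, cost_per_raw_gb=0.30)

# The same sellable capacity with only 20% over-provisioning (assumed level):
lean = effective_nand_cost(sold_gb=800, raw_gb=960, cost_per_raw_gb=0.30)

print(f"2x over-provisioned:  ${ssd:.2f}/GB")   # $0.60/GB
print(f"20% over-provisioned: ${lean:.2f}/GB")  # $0.36/GB
```

The raw NAND price cancels out of the comparison: at the same $/GB of raw NAND, halving the over-provisioning overhead directly lowers the cost of every sellable gigabyte.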

Second, the SSD price also has to cover all the costs to develop, manufacture, distribute and sell the SSD, as well as the manufacturer’s profit on top of that.

Instead, Violin developed strategic supplier relationships with companies that manufacture NAND, like Toshiba (the inventor of flash). This allows us to source NAND directly (cutting out the middleman) and to drive the cost down significantly, both by adopting the latest NAND geometries/generations as early as possible and by minimizing unnecessary over-provisioning.

2. An all-flash array needs to have the lowest possible system cost.

We believe that - unlike other all-flash array vendors - the best and most cost-efficient system is not built from “off-the-shelf” (consumer-grade) components. Rather, it is designed from the ground up as a simple, enterprise-grade system that delivers the highest performance at the lowest possible cost - a system worthy of storing your mission-critical tier-1 enterprise data.

3. Inline de-duplication and compression reduce the amount of flash consumed.

Why does it make so much sense to use inline de-duplication and compression?

  • Flash is a perfect medium to do inline de-duplication and compression due to its low latencies.
  • Today’s enterprises hold a lot of duplicate and compressible data, and applying both techniques together yields significant reductions - on average, 6:1.
  • Enterprise primary storage gives the maximum statistical chances of finding duplicate and compressible data.
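The economics of that average 6:1 reduction can be made concrete with a short sketch. The $3/GB raw flash price is a hypothetical figure for illustration, not a Violin list price:

```python
def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """Effective $/GB of data stored after inline de-dup + compression."""
    return raw_cost_per_gb / reduction_ratio

# Hypothetical $3.00/GB raw flash, reduced 6:1 by de-dup + compression:
flash_effective = effective_cost_per_gb(raw_cost_per_gb=3.00, reduction_ratio=6.0)

print(f"Effective cost: ${flash_effective:.2f}/GB")  # $0.50/GB
```

This is why data reduction is the third lever alongside cheap NAND and a low-cost system: every 1 GB of physical flash can hold several gigabytes of logical data, dividing the effective price per gigabyte by the reduction ratio.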

However, even with the most advanced DRAM and CPU technologies available today, the reality is that inline reduction of data through de-duplication and compression adds performance penalties compared to accessing the raw flash.

This is why the ability to choose when to turn de-duplication and compression off gives you maximum flexibility to unleash the full power of flash for the workloads that need it, and should be a key part of any tier-1 primary storage all-flash array. Don’t need that much performance? Keep it on, and still enjoy 10X lower latency and 10X more performance than disk - at the cost of disk!

This three-step strategy has resulted in Violin’s all-flash arrays having the lowest possible price on any metric - raw, usable and effective - but never at the cost of performance or functionality.

Interested in learning more? Contact us and we will be more than happy to help with your move from disk to flash and demonstrate our price leadership.

In the next installment of this blog series, I will share my thoughts around the software features that we think are essential for your next-generation all-flash tier-1 primary storage platform.

Stay tuned!

Topics: Business Applications, Technology Trends