Insights, Best Practices, and Forward Thinking from the Customer-Facing Team, Solution Architects, and Leaders on Extreme-Performance Applications, Infrastructure, Storage, and the Real-World Impact Possible

Why Storage Performance Matters - Part II

by VIOLIN SYSTEMS on March 7, 2013

Legacy storage vendors, wanting to prove value, lean on the $/GB metric to justify their price tags, stating, for example, that their storage system costs a mere $0.07/GB.

As I pointed out in Part I of this article, the key metric for enterprise applications is not storage capacity; rather, it is storage performance and I/O randomization.  To grasp the gravity of the situation, let's picture a large-scale, 5,000-seat virtual desktop infrastructure (VDI) environment based on Microsoft Windows 7.

In a VDI setting, there are four distinct lifecycle periods during which the I/O demand needs to be characterized:

Activity             IOs
User Login           2,000
Application Launch   Varies by application: approximately 800 for Outlook, 500 for a standard Web browser, and 500 for Microsoft Office Word
Steady State         8-50
User Logoff          300
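For readers who want to play with these numbers, here is a minimal sketch in Python that encodes the table's figures; the values, and the choice of Outlook plus a browser as a representative login workload, are assumptions carried over from above:

```python
# Approximate per-activity IO counts for one Windows 7 VDI user,
# taken from the table above (rough, workload-dependent figures).
LIFECYCLE_IOS = {
    "user_login": 2_000,
    "launch_outlook": 800,
    "launch_browser": 500,
    "launch_word": 500,
    "user_logoff": 300,
}
STEADY_STATE_IOPS_RANGE = (8, 50)  # an ongoing rate, not a one-time burst

# Assumed representative morning workload: log in, open Outlook and a browser.
login_burst = (LIFECYCLE_IOS["user_login"]
               + LIFECYCLE_IOS["launch_outlook"]
               + LIFECYCLE_IOS["launch_browser"])
print(f"IOs per login burst: {login_burst:,}")  # 3,300 -- close to the
                                                # ~3,500 figure used below
```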

When deploying a storage backend for the VDI system, the typical calculation is to look at the steady-state IO rate, add some cache, and then deploy enough spindles to handle that load.  A "power user" generates 18-24 IOPS in a Windows 7 environment, with the average user needing half as many.  To be on the safe side, the storage administrator could pick the 25 IOPS figure, multiply it by the 5,000 VDI seats, and divide it by the 100 IOPS that each spindle can provide, for a total of 1,250 hard disk drives.  But because those disks will be in a 6+2 RAID6 configuration, the total number of spindles is actually about 1,670 (1,250 × 8/6).  At 3 TB per disk, we will have a total of roughly 5 PB of storage.  User logon is highly correlated with application startup, generating some 3,500 IOs per user.  This heavy IO period happens at the start of the workday.  If we assume that 90% of the 5,000 workers log on in a 30-minute period, for example between 9:00 am and 9:30 am, we can anticipate about 16 million IOs in the morning boot storm from the VDI workload alone.
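As a sanity check on that arithmetic, here is a back-of-the-envelope sketch in Python; all inputs (25 IOPS per user, 100 IOPS per spindle, 6+2 RAID6, 3 TB disks, a 90% logon rate, ~3,500 IOs per logon) are the assumptions stated above, not measured values:

```python
# Back-of-the-envelope VDI storage sizing, using the figures quoted above.
SEATS = 5_000
IOPS_PER_USER = 25             # conservative "power user" figure
IOPS_PER_SPINDLE = 100         # rough per-HDD random IOPS
RAID_DATA, RAID_PARITY = 6, 2  # 6+2 RAID6 group
TB_PER_DISK = 3

steady_state_iops = SEATS * IOPS_PER_USER             # 125,000
data_spindles = steady_state_iops / IOPS_PER_SPINDLE  # 1,250
total_spindles = data_spindles * (RAID_DATA + RAID_PARITY) / RAID_DATA  # ~1,667
raw_capacity_pb = total_spindles * TB_PER_DISK / 1_000                  # ~5 PB

# Morning boot storm: 90% of users log on within 30 minutes,
# each generating ~3,500 IOs (logon plus application launches).
boot_storm_ios = int(SEATS * 0.90) * 3_500            # 15,750,000

print(f"{total_spindles:,.0f} spindles, {raw_capacity_pb:.1f} PB raw, "
      f"{boot_storm_ios:,} boot-storm IOs")
```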

Collectively, these disks will deliver a peak of 125,000 IOPS.  During peak time, some 16,000,000 IOs are needed.  The difference between the peak need of this VDI system and the peak performance of the storage subsystem is absorbed as user wait time, also known as latency:  all employees logging into this VDI system will have to wait for the system to respond.  This latency causes lost productivity, which of course has its own price tag on top of the capital and operational expense needed to keep roughly 1,670 disk drives spinning.
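One rough way to see the scale of that wait, under the same assumptions, is to compare the boot storm against the 125,000 IOPS ceiling:

```python
BOOT_STORM_IOS = 16_000_000  # morning logon IOs from above
PEAK_IOPS = 125_000          # best case the spindle farm can deliver

# If the whole storm hit at once, draining the backlog would take:
drain_seconds = BOOT_STORM_IOS / PEAK_IOPS
print(f"Backlog drain time at peak rate: {drain_seconds:.0f} s")  # 128 s

# Averaged over the 30-minute window the demand looks tamer, but logons
# cluster, so users still queue behind spikes above the 125K ceiling --
# and that queueing is the latency employees actually feel.
avg_demand_iops = BOOT_STORM_IOS / (30 * 60)
print(f"Average demand over 30 minutes: {avg_demand_iops:,.0f} IOPS")  # ~8,889
```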

Notice that we needed 5 PB of storage to get to 125,000 peak IOPS.  That 5 PB is foisted on the enterprise on the premise that it costs a mere $0.07/GB.  However, when you consider that (a) only a fraction of the capacity is needed; (b) a few racks of new equipment would have to be deployed; (c) additional licenses for storage management software are needed; (d) far more power, real estate, and cooling is consumed than warranted; and (e) additional headcount is needed to handle this large system, the actual $/GB metric creeps up significantly, by as much as two orders of magnitude.  So you end up with a system that really costs around $7/GB for a measly 125K IOPS of peak performance.
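To illustrate how the headline $0.07/GB can inflate by two orders of magnitude, here is a hypothetical sketch; the utilization fraction and overhead multiplier are illustrative placeholders, not measured figures:

```python
# Hypothetical effective-cost calculation. NEEDED_FRACTION and
# OVERHEAD_MULTIPLIER are illustrative placeholders, not measured data.
HEADLINE_COST_PER_GB = 0.07  # vendor's advertised $/GB
DEPLOYED_GB = 5_000_000      # ~5 PB deployed purely to reach the IOPS target
NEEDED_FRACTION = 0.05       # assume only ~5% of that capacity is needed
OVERHEAD_MULTIPLIER = 5      # racks, licenses, power, cooling, floor space,
                             # headcount (hypothetical aggregate factor)

total_cost = HEADLINE_COST_PER_GB * DEPLOYED_GB * OVERHEAD_MULTIPLIER
useful_gb = DEPLOYED_GB * NEEDED_FRACTION
print(f"Effective cost: ${total_cost / useful_gb:.2f}/GB")  # $7.00/GB
```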

Legacy Storage Performance Costs

Storage value, therefore, is the price of capacity within a target performance window.  Getting that performance out of legacy storage drives up costs significantly and destroys value.  And an underperforming storage subsystem destroys value too, impeding an enterprise's competitiveness by making critical business applications, like VDI, OLTP systems, or real-time reporting, run slower than the business demands.

Flash-based storage arrays redefine the storage value curve in a way that legacy storage systems cannot.  In the next installment of this article, I will examine how Violin’s flash-based memory arrays provide the right high-performance solution for enterprise-grade business critical applications.