Last week, we kicked off the "11 Things People Do to Make Their Storage Faster" blog series. Today, we'll cover over-provisioning.
Many legacy storage systems today boast massive capacities, yet remain largely unused because performance degrades as they approach full capacity. At the heart of the problem lies an imbalance between a disk's capacity and the number of I/O operations it can service per second. How much of your storage system do you actually use?
Let’s consider an example:
To meet application demands, you have a hard-working 3TB database, churning through 100,000 read/write operations every second (IOPS). A typical high-end, 600GB SAS disk spinning at 15k RPM can only deliver 200 IOPS, at best. Doing the math, you would need 500 of these disks to satisfy the database's I/O requirement.
Your total capacity now reaches 300TB (600GB x 500), when you only needed 3TB. The remaining capacity sits idle, unless you want to risk a performance hit.
In this example, the compromise for performance translates into 99% wasted disk capacity. And while the price of disk has decreased tremendously, there are other costs to consider as sprawl increases in your data center: space, cooling, power, and maintenance. This is not a negligible compromise.
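The sizing math above can be sketched in a few lines. This is a minimal illustration using the figures from the example (3TB database, 100,000 IOPS, 600GB disks at ~200 IOPS each); the variable names are ours, not from any sizing tool.

```python
import math

# Figures from the example above (assumed, for illustration only):
required_iops = 100_000   # database workload
iops_per_disk = 200       # 15k RPM SAS disk, best case
disk_capacity_tb = 0.6    # 600GB per disk
database_size_tb = 3

# The array must be sized by IOPS, not by capacity.
disks_needed = math.ceil(required_iops / iops_per_disk)
total_capacity_tb = disks_needed * disk_capacity_tb
wasted_fraction = (total_capacity_tb - database_size_tb) / total_capacity_tb

print(disks_needed)              # 500 disks
print(total_capacity_tb)         # 300.0 TB provisioned
print(f"{wasted_fraction:.0%}")  # 99% of capacity idle
```

The key point the arithmetic makes: the disk count is driven entirely by the IOPS line, so capacity comes along for the ride whether you need it or not.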
More is not always better. You don't need to over-provision on outdated, mechanical disk to optimize performance. Not when there is an all-flash alternative. A single Violin Systems all-flash storage array delivers 1 million IOPS at sustained low latency (the time it takes to read or write a block of data) measured in microseconds, meeting any database workload requirement. See how flash storage makes your applications run fast and look good.
Put us to the test.