Insights, Best Practices and Forward Thinking from the Customer-Facing Team, Solution Architects and Leaders on Extreme-Performance Applications, Infrastructure, Storage and the Real-World Impact They Make Possible

The Need For Speed — IOPS vs. Latency

by VIOLIN SYSTEMS on August 8, 2016
 

Many in IT strive for the Google experience: the ability to deliver SLAs that allow for instant log-on, instant access to information, and instant results. Do a search on Google, and the expectation is that the result will be served up in under a second.

You and I are now conditioned for this rapid response and so are our respective internal and external customers.

The question to ask yourself is: “Can my storage environment support this expectation?”

To help you Be Instrumental in delivering this need for speed, over the next five weeks we are going to explore why latency matters, sharing leading third-party points of view as well as our own to equip you with the insight you need to be successful.

In return for joining us on this journey, you will have the opportunity to win a day at the famed Skip Barber Racing School, among other prizes.



So let’s start driving.

 

Across storage brands and models, the one constant metric is a comparison of the I/Os per second (IOPS) a vendor's array can deliver versus that of its competitors. Given its prevalence, you would think this is the one metric that separates the best arrays from the rest.

I get the human mindset and the need for speed, but for those who choose to test drive and ultimately buy high-performance machines (whether they are race cars or storage arrays), what really matters is how fast you can go from zero to 60 (or 90, or 120) and whether you can sustain that performance over time. What doesn't matter is whether your speedometer reads 120, 180 or even 260 mph (420 km/h) for merely a split second.


IOPS vs. Latency

IT professionals should think of IOPS as the number of transactions an array can pass through the aggregate of all its ports in a given second, while latency is the amount of time it takes to process a single transaction within the system.
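The two measures are linked by Little's Law: the number of I/Os in flight equals IOPS multiplied by latency. A minimal sketch, using hypothetical numbers (a queue depth of 32 and the latencies discussed later in this post), shows how latency caps the IOPS a host can actually achieve:

```python
# Little's Law for storage: outstanding I/Os = IOPS * latency.
# With a fixed number of I/Os the host keeps in flight (queue depth),
# the achievable IOPS is capped by per-I/O latency.

def max_iops(queue_depth: int, latency_s: float) -> float:
    """IOPS ceiling for a given queue depth and per-I/O latency (seconds)."""
    return queue_depth / latency_s

# Two hypothetical arrays, same host queue depth of 32 outstanding I/Os:
print(max_iops(32, 500e-6))  # 500 microseconds -> 64000.0 IOPS
print(max_iops(32, 2e-3))    #   2 milliseconds -> 16000.0 IOPS
```

Same host, same queue depth: the lower-latency array sustains four times the throughput.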

You can think of IOPS as a measurement of the top speed a particular race car can reach. If one car has a top speed of 130 miles per hour and another has a top speed of 200 miles per hour, which one will win the race? Well, it depends on the context of their latency. How long does it take each car to reach its top speed? If the turns on the track limit the cars to about 80 mph, and the 200 mph car accelerates to 80 in 12 seconds but the 130 mph car can do it in only 8, which car is going to win: the one with the higher top speed (IOPS) or the one with better acceleration (latency)?

IOPS are an important measurement of storage performance when taken in the context of storage latency. The difference between an array that can do one million IOPS and one that can do 300,000 IOPS is not relevant to you if your application only does 50,000 IOPS. However, the difference between one that can do 50,000 IOPS at 500 microseconds and one that does the same 50,000 IOPS at 2 milliseconds can fundamentally alter your data center. It may sound like we are talking about a difference of a few thousandths of a second, but in this example it represents a real-world performance difference of 4x (2 milliseconds versus 500 microseconds).
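Where that 4x shows up most starkly is in workloads whose I/Os are serially dependent, where each read must complete before the next can be issued (as in index traversals). A rough sketch, with a hypothetical count of 50,000 dependent I/Os, shows total runtime is driven entirely by latency:

```python
# Serially dependent I/Os: each request must complete before the next
# is issued, so total time is count * latency, no matter how many IOPS
# the array could deliver in parallel.

def serial_runtime_s(io_count: int, latency_s: float) -> float:
    """Wall-clock time (seconds) to complete io_count dependent I/Os."""
    return io_count * latency_s

# 50,000 dependent I/Os (hypothetical workload):
print(serial_runtime_s(50_000, 500e-6))  # 25.0 seconds at 500 us
print(serial_runtime_s(50_000, 2e-3))    # 100.0 seconds at 2 ms
```

The same transaction count finishes in 25 seconds on the 500-microsecond array and 100 seconds on the 2-millisecond one, the 4x difference described above.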

The less latency that your storage has, the more that it can do. Your storage can act on requests faster and deliver more data to more processors in less time. This means your applications can run faster, you may need fewer servers, and the servers you already have can do more. That’s why low latency storage matters.

 
