The speed of your system matters. When your users are sending and receiving large volumes of data, slow operations can be a problem.
When talking about your storage system’s speed, you’ve probably heard about throughput and input/output operations per second. IT technicians swap stories about optimizing their storage systems’ speed the way top race car mechanics talk about pushing their cars past a hundred miles per hour.
Storage system speed and input/output operations per second are regularly evaluated in performance tests chasing peak numbers. In 2019, technology broke all previous speed records, landing at 13.8 million input/output operations per second on a non-volatile memory express (NVMe) storage system.
World records aside, why do input and output operation speeds matter? Why so much talk about inputs, outputs, and throughput? Let’s look at input/output operations per second and throughput, what makes them different, and how each contributes to performance.
All About IOPS
IOPS is short for input/output operations per second and is commonly pronounced “i-ops” or “eye-ops.” IOPS is a standard unit of measurement for how many read and write operations your storage system can perform each second against non-contiguous storage locations. It’s a reliable way to measure performance in solid-state drives (SSDs), all-flash systems, and storage area networks (SANs). Another way to think of IOPS is as a count of how many complete input/output operations, start to finish, your storage system can perform in one second.
When we talk about IOPS, we’re not just describing the transfer rate of an all-flash array or how fast data can be moved from contiguous storage locations. IOPS is also a great way to track the storage performance of your system as a whole.
Think of IOPS the way you’d think about a car engine. We evaluate car engines in revolutions per minute (rpm). When a car is in neutral and the engine is running, knowing that the engine can spin at 10,000 rpm doesn’t mean much; the car isn’t going anywhere. IOPS alone is similar: on its own, the number tells you little. But paired with information about the size of the data block and the read/write mix, IOPS can give you a good idea of what to expect from the performance of your storage system, your “engine.”
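The pairing the analogy points to can be written as a simple rule of thumb: throughput is roughly IOPS multiplied by block size. A minimal sketch with purely illustrative numbers (the function name and figures are hypothetical, not measurements of any particular drive):

```python
# Rule of thumb: throughput (bytes/sec) ≈ IOPS x block size (bytes).
# All numbers below are illustrative, not benchmarks.

def throughput_mb_per_s(iops: int, block_size_bytes: int) -> float:
    """Estimate throughput in megabytes per second from IOPS and block size."""
    return iops * block_size_bytes / 1_000_000

# The same IOPS figure implies very different throughput at different block sizes:
print(throughput_mb_per_s(100_000, 4_096))   # 4 KB blocks  -> 409.6 MB/s
print(throughput_mb_per_s(100_000, 65_536))  # 64 KB blocks -> 6553.6 MB/s
```

This is why an IOPS number quoted without a block size (and read/write mix) is like an rpm figure quoted with the car in neutral.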
So, what is “throughput”?
Throughput refers to your system’s actual capacity to move data from one point to another. It’s a measurement of how much data can pass from point A to point B in a given amount of time: the speed of data transfer, typically measured in megabytes per second.
Some confuse throughput and bandwidth, but these are two very different (albeit related) ideas. Think of throughput and bandwidth as a multi-lane highway. The bandwidth is the total capacity of your system to send data. You might have two lanes, six lanes, or more. The number of lanes is like your bandwidth.
But bandwidth and throughput are a little different. Like a highway, certain factors affect the speed of your system. While your bandwidth is a measurement of your system under perfect conditions, this just isn’t realistic. When it comes to your enterprise system, many things can affect the actual speed, including:
- Traffic load
- Packet loss
- The power of your hardware
- Encryption and decryption
- And much more
With all of these factors at play, your actual transfer rate falls below your bandwidth. That real-world rate is your throughput: a measurement of how fast traffic actually moves.
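The highway analogy can be sketched numerically: bandwidth is the link’s rated capacity, while throughput is what you actually measure over an interval. The figures below are hypothetical, chosen only to show the distinction:

```python
# Bandwidth: rated capacity of the link (bytes/sec) under ideal conditions.
# Throughput: data actually delivered over a measured interval.
# All figures are illustrative.

bandwidth_bytes_per_s = 1_250_000_000   # e.g. a 10 Gb/s link = 1.25 GB/s rated

bytes_delivered = 9_000_000_000         # measured over the interval
elapsed_seconds = 10.0

throughput = bytes_delivered / elapsed_seconds    # achieved rate in bytes/sec
utilization = throughput / bandwidth_bytes_per_s  # fraction of rated capacity

print(f"throughput: {throughput / 1e6:.0f} MB/s, utilization: {utilization:.0%}")
```

Here the link is rated at 1.25 GB/s but only delivers 900 MB/s, i.e. 72% utilization; traffic load, packet loss, and the other factors above account for the gap.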
IOPS, Throughput, and NVMe Storage
Choosing the right storage system for optimum speed and performance is a lot like choosing the right kind of race car for a particular race track; it depends on what you need it to do. Talking about IOPS is like selecting a sleek racer or a two-seater sports car with a powerful engine, designed to go fast.
On the other hand, throughput has been compared to a school bus, transporting the most passengers (your data).
On their own, both are necessary, but having the combination of IOPS and throughput working in tandem is what will ultimately help you “win the race.”
What does this mean for your system?
When you need a high number of read and write operations and excellent throughput, NVMe all-flash arrays are a reliable choice.
NVMe all-flash arrays take advantage of high-speed SSDs, with their low latency and parallelism, to do the following:
- Improve IOPS
- Bolster throughput
- Reduce latency
The best part? An NVMe storage device will scale with you to support your future high-performance needs and devices that rely on persistent memory technologies.
When it comes to throughput, NVMe is made for speed the same way a top-of-the-line race car is, with as many as 64,000 I/O queues. With an all-flash system, NVMe throughput can reach 32 gigabytes per second, and it’s not at all uncommon to experience over 500,000 IOPS, with some higher-end drives reaching 10 million IOPS. The best part is that even at these high speeds, latency rates still tend to stay below 20 microseconds, with some well below half that.
Compared to legacy storage systems, you just can’t beat that kind of performance.
That’s not all an NVMe all-flash array can do. With a solution designed for high-performance applications, you’ll also see the following benefits:
- Lower energy usage
- 100 percent performance (even at 100 percent capacity!)
- Fewer server licenses needed for your team
- Compatibility with most software interfaces
- Reduced total cost of operation
At VIOLIN Systems, we understand that optimizing the power of your storage system isn’t just about input/output operations per second or your throughput capacity; it’s about your entire storage system as a whole, optimizing speed and capacity together with a reliable, affordable, scalable solution.
We’re here to help you get the most out of your storage system with an NVMe, all-flash array. Want to discuss what an all-flash array can do for you? Contact us today!