What’s the penalty for a cache miss?
Many legacy storage vendors – you know who you are – try to mask the slow performance of their storage systems with DRAM caches. This particular tactic results in hit-or-miss performance. (For the full list of 11 tactics, click here.)
The cache allows some “lucky” I/O operations to complete with fast response times: a “cache hit.” However, “unlucky” operations whose data is not found in the cache suffer a “cache miss”: they pay for the fruitless cache lookup, then take the additional hit of fetching the data from the underlying disk storage. As a result, performance becomes unpredictable, a problem exacerbated by operations such as backups and batch jobs that flush useful data from the cache.
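The arithmetic behind this penalty is easy to sketch. The numbers below are hypothetical round figures for illustration (a sub-microsecond DRAM lookup versus a multi-millisecond disk read), not measurements of any particular system:

```python
# Hypothetical latencies for illustration only:
CACHE_LOOKUP_US = 0.1   # DRAM cache lookup (microseconds)
DISK_READ_US = 5000.0   # spinning-disk read (microseconds)

def effective_latency_us(hit_rate: float) -> float:
    """Average I/O latency: every operation pays the cache lookup,
    and a miss pays the disk read on top of it."""
    miss_rate = 1.0 - hit_rate
    return CACHE_LOOKUP_US + miss_rate * DISK_READ_US

for hr in (0.99, 0.90, 0.50):
    print(f"hit rate {hr:.0%}: {effective_latency_us(hr):7.1f} us average")
```

Even a 99% hit rate leaves the average latency hundreds of times worse than a pure cache hit, and a backup job that drives the hit rate down briefly makes it thousands of times worse. That swing is the unpredictability described above.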
The idea of a cache simply does not make sense when the entire data set can live in ultra-fast, low-latency flash memory. Why not go all the way?