Today we announced a really cool new product called the XVS 8. In a nutshell, we made the fastest enterprise storage in the world even faster. Speed, by which I mean ultra-low latency, can do some amazing things. It does the obvious: it lets your applications run faster, increasing profitability and customer satisfaction. Lowering latency does another cool thing: it lets you consolidate your infrastructure.
Listen to Ashish and Narayan share their thoughts on Violin's presence at this year's VMworld in SF:
How can your hosted virtual desktop deployments achieve the following?
- Sub-second boot time per desktop, which includes VMware tools installation
- Consistent performance even when continuous boot storms are thrown into the mix
- Sub-$500 per desktop for better-than-dedicated-desktop performance (list price, all components and SW licenses included)
What’s stalling virtual desktop deployments?
Virtualizing desktops has been one of the top five IT initiatives for the last few years. However, about half of VDI projects stall due to unacceptable user experience and project cost overruns. A bad user experience leads to low adoption, and the experience issues become acute when small pilots of hundreds of desktops are scaled up to thousands. Over-provisioning of storage and constant tuning drive up costs.
Suspend disbelief for a moment and ponder the following questions: What would your business look like if we made it economical to run your storage infrastructure at speeds closer to that of memory than that of disk storage? How fast would your applications run? What business models would it open up? How much competitive advantage could you gain?
Modern hosting and service providers often face the challenge of managing the cost efficiencies of their platform. A common solution is to deploy a multi-tenant or multi-instance architecture in which many customers share the same hardware. The reuse of hardware over many clients drives down costs and also reduces the required ongoing administration.
Multi-tenant deployments host many clients in the same instance of software while segregating the client data through configuration. Multi-instance designs are similar but run one instance of software per client. From the storage tier's perspective, both approaches require many sub-ecosystems to run simultaneously in a shared space, and both create similar access challenges:
- Unpredictability of usage
- Height of individual usage spikes
- Scale versus storage performance (more clients translates into a more random and parallel workload)
SSDs are like cordless phones and DVDs: they improved an existing technology but didn't revolutionize its use. In technology there is a difference between modernizing and revolutionizing. Modernizing is finding a way to do the same thing a little faster or a little easier. Revolutionizing is eliminating, or vastly changing, how something is done.
In the previous two installments of this article, I discussed the high cost of squeezing performance out of legacy storage for high-performance enterprise applications, and how legacy storage performance destroys value and increases costs.
Legacy storage vendors, wanting to demonstrate value, lean on the $/GB metric to justify their price tag, stating, for example, that their storage system costs $0.07/GB.
As I pointed out in part I of this article, the key for enterprise applications is not storage capacity; rather, it is storage performance and I/O randomization. To grasp the gravity of the situation, let’s picture a large-scale, 5,000-seat virtual desktop infrastructure (VDI) environment based on Microsoft Windows 7.
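To see why $/GB alone misleads, the arithmetic can be sketched in a few lines. All figures below are illustrative assumptions (not vendor pricing): a hypothetical disk array priced to hit the $0.07/GB headline number, compared against a hypothetical flash array on a cost-per-IOPS basis, which is the metric that actually matters for a random-I/O workload like VDI.

```python
# Illustrative comparison of $/GB versus $/IOPS for two hypothetical
# storage systems. Every number here is an assumption for the sake of
# the arithmetic, not a real quote.

def cost_per_gb(price_usd, capacity_gb):
    """Capacity-centric metric favored in legacy pricing pitches."""
    return price_usd / capacity_gb

def cost_per_iops(price_usd, iops):
    """Performance-centric metric relevant to random-I/O workloads."""
    return price_usd / iops

# Hypothetical legacy disk array: cheap capacity, expensive performance.
legacy_price, legacy_gb, legacy_iops = 70_000, 1_000_000, 20_000
# Hypothetical flash array: pricier capacity, far cheaper performance.
flash_price, flash_gb, flash_iops = 200_000, 40_000, 1_000_000

print(f"legacy: ${cost_per_gb(legacy_price, legacy_gb):.2f}/GB, "
      f"${cost_per_iops(legacy_price, legacy_iops):.2f}/IOPS")
print(f"flash:  ${cost_per_gb(flash_price, flash_gb):.2f}/GB, "
      f"${cost_per_iops(flash_price, flash_iops):.2f}/IOPS")
```

Under these assumed numbers the disk array wins on $/GB ($0.07 vs. $5.00) yet costs roughly 17x more per IOPS ($3.50 vs. $0.20) — and a boot storm across thousands of Windows 7 desktops is bought in IOPS, not gigabytes.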