Suspend disbelief for a moment and ponder the following questions: What would your business look like if we made it economical to run your storage infrastructure at speeds closer to that of memory than that of disk storage? How fast would your applications run? What business models would it open up? How much competitive advantage could you gain?
In the previous two installments of this article, I discussed the high cost of squeezing performance out of legacy storage for high-performance enterprise applications, and how legacy storage performance destroys value and increases costs.
Legacy storage vendors lean on the $/GB metric to justify their price tags, claiming, for example, that their storage system costs only $0.07/GB.
As I pointed out in part I of this article, the key for enterprise applications is not storage capacity; rather, it is storage performance and I/O randomization. To grasp the gravity of the situation, let’s picture a large-scale, 5,000-seat virtual desktop infrastructure (VDI) environment based on Microsoft Windows 7.
When deploying a new IT system, the four key questions are:
- What are the goals and what value is to be created?
- How do we achieve those goals?
- What type of systems should be implemented to meet the goals, and what is the required storage performance?
- Does the target system support the value that the initial goal demands?
For instance, your goal may be to reduce the total cost of ownership associated with the thousands of desktops in your enterprise, and you will achieve that goal through centralized desktop management. Value is created in the process by reducing the costs of both the required infrastructure and the human effort needed to manage it.
Gregor Waddell, Assistant Director of IT at Anglia Ruskin University, joins us for a two-part blog series in which he shares his insights on the university’s VDI initiative.
In my previous post, I outlined how our VDI implementation reduced power consumption and increased IT performance at Anglia Ruskin University.
We opted for a VDI deployment to achieve our objectives: reducing power consumption while providing an optimal IT environment and experience for our 32,000 students.
The building hosting the new open-access IT area was designed without cooling, which presented a heat problem for traditional PCs. At the same time, our media-rich applications added to power consumption and ruled out thin-client technologies.
These challenges presented themselves at a time when VDI technology was coming of age, and it addressed all of these issues. After months of planning and testing, in September 2011 we successfully launched the new hosted virtual desktop using Violin Flash Memory Arrays. During the process it became clear that storage performance is key to VDI, and our existing spinning-disk storage did not offer sufficient performance: our virtual machines needed 80-100 IOPS per desktop. This is where Violin proved to be the instrumental partner, making VDI a reality for us.
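To see why per-desktop IOPS dominates VDI storage sizing, the 80-100 IOPS figure above can be multiplied out across a deployment. The sketch below is a back-of-the-envelope calculation only; it borrows the 5,000-seat example from earlier in this article (Anglia Ruskin’s actual seat count may differ), and the per-spindle figure is a commonly cited ballpark, not a measured value.

```python
# Back-of-the-envelope VDI storage sizing.
# Per-desktop IOPS range (80-100) is from the article; the 5,000-seat count
# reuses the earlier hypothetical example, not Anglia Ruskin's deployment.

def aggregate_iops(seats, iops_low=80, iops_high=100):
    """Return the (low, high) steady-state aggregate IOPS for a VDI estate."""
    return seats * iops_low, seats * iops_high

low, high = aggregate_iops(5000)
print(f"5,000 seats need roughly {low:,}-{high:,} IOPS")
# -> 5,000 seats need roughly 400,000-500,000 IOPS

# A 15K RPM disk delivers on the order of ~180 random IOPS, so even the
# low end of that range implies well over 2,000 spindles of raw disk
# performance before RAID or boot-storm peaks are considered.
spindles_needed = low // 180
```

Capacity barely figures in this calculation, which is exactly why a $/GB comparison says so little about whether an array can serve a VDI workload.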
This post highlights key insights and considerations for successfully implementing a VDI infrastructure, drawn from the extensive research process we went through.