Leaders across countless industries are working to figure out the best solutions for high-capacity data storage management. To stay relevant, data storage companies have to provide products that are effective, efficient, and innovative. Falling back on data storage solutions that are years out of date simply isn’t an option.
Sometimes the best way to advance the state of technology is to return to past architectures. I call this approach "back to the future" innovation.
From a young age, we are pre-programmed to believe that high performance is expensive. As kids, we played with toy Ferraris and Porsches, cars known for performance but also for price tags that typically only the wealthy can afford. Nor were such cars known for efficiency or, for that matter, reliability, at least until the arrival of Tesla's electric cars. A Tesla offers high-performance acceleration, zero gas costs, and a low cost of operation, and it truly transforms that pre-programmed thinking. Unfortunately, the acquisition cost of a Tesla is not yet affordable for everyone.
With all the technology approaches in the marketplace for modernizing IT, it is a challenge to cut through the noise and figure out which approach is best for you and your company. Whether you are considering cloud, hyperconverged infrastructure, a revolutionary database product, or a new DevOps program, it all seems daunting. In recent conversations and blogs I have read, the consensus is that any of these changes needs to be accompanied by a change in culture and skill set. The complexity of these changes, and the impact they have on an organization, reminds me of the process I went through when deciding whether to modernize my existing home or buy a new one.
I will not be selling you anything in this blog. I will be sharing a small part of the thought process Ferrellgas (NYSE: FGP) exercised as they evaluated their flash storage options and why they decided to go with Violin Systems and the 7300 Flash Storage Platform at their last refresh.
This post is part of a series of posts (starting with XtremIO - At the Bit Level) in which I will explain not only the significant architectural failings of the XtremIO product, but also what is, in my opinion, a fundamentally and deliberately misleading way of presenting the product to the market.
During EMC’s launch of the XtremIO product, they made a number of jaw-dropping claims that they now seem to be pretending were never made. One XtremIO blogger, for example, comments:
“One of the eye opening claims we made during our launch on November 14th was that the XtremIO array doesn’t have any system-level garbage collection processes. In the coverage and chatter that followed our launch we noticed that some people interpreted this to mean that the flash in our arrays was somehow impervious to the need for garbage collection, which of course is impossible. To be clear, all flash requires garbage collection. What matters is where and how it is performed. With XtremIO, performance is always consistent and predictable because garbage collection is handled in a very novel way, only possible with XtremIO’s unique architecture.”
What is the very novel way that XtremIO handles garbage collection? What is it about their architecture that is so unique that only their approach makes this possible? In this second post, we will try to answer these questions, and we are sure the answers will surprise you.
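For readers who want a concrete picture of what garbage collection actually involves at the flash level, here is a minimal sketch in Python. It is not XtremIO's implementation, nor any vendor's; the class and method names are hypothetical. It simply illustrates the mechanics the quote refers to: flash pages cannot be overwritten in place, whole blocks must be erased, and any still-valid pages in a block have to be copied forward before that block can be reclaimed. That copy-forward work is the write amplification that makes garbage collection visible in performance.

```python
# Minimal, illustrative model of NAND flash garbage collection.
# Hypothetical sketch only -- not any vendor's actual code.

PAGES_PER_BLOCK = 4

class Flash:
    def __init__(self, num_blocks):
        self.blocks = [[] for _ in range(num_blocks)]   # each page slot is (lba, valid)
        self.free_blocks = list(range(num_blocks))      # fully erased blocks
        self.active = self.free_blocks.pop(0)           # block currently being filled
        self.map = {}                                   # lba -> (block index, page index)
        self.gc_page_copies = 0                         # extra writes caused by GC

    def _ensure_space(self):
        # Allocate a fresh block when the active one is full,
        # running garbage collection first if no erased blocks remain.
        while len(self.blocks[self.active]) == PAGES_PER_BLOCK:
            if self.free_blocks:
                self.active = self.free_blocks.pop(0)
            else:
                self._garbage_collect()

    def write(self, lba):
        # Flash cannot overwrite in place: mark any previous copy invalid,
        # then append the new version to the active block.
        if lba in self.map:
            blk, pg = self.map[lba]
            self.blocks[blk][pg] = (lba, False)
        self._ensure_space()
        self.blocks[self.active].append((lba, True))
        self.map[lba] = (self.active, len(self.blocks[self.active]) - 1)

    def _garbage_collect(self):
        # Pick the full block (other than the active one) with the fewest
        # valid pages, copy those pages forward, then erase the block.
        full = [b for b in range(len(self.blocks))
                if b != self.active and len(self.blocks[b]) == PAGES_PER_BLOCK]
        victim = min(full, key=lambda b: sum(1 for _, valid in self.blocks[b] if valid))
        survivors = [lba for lba, valid in self.blocks[victim] if valid]
        self.blocks[victim] = []                        # erase the whole block
        self.free_blocks.append(victim)
        for lba in survivors:
            del self.map[lba]                           # old copies vanished with the erase
        for lba in survivors:
            self.gc_page_copies += 1                    # copy-forward = write amplification
            self.write(lba)

if __name__ == "__main__":
    flash = Flash(num_blocks=4)
    for i in range(40):
        flash.write(i % 6)   # keep overwriting a small, hot set of logical addresses
    print("pages copied by garbage collection:", flash.gc_page_copies)
```

The interesting design question, and the one this series digs into, is not whether this work happens, but where it happens (in the SSD, in the array controller, or both) and how predictably it is scheduled under load.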
Modern hosting and service providers often face the challenge of managing the cost efficiency of their platforms. A common solution is to deploy a multi-tenant or multi-instance architecture in which many customers share the same hardware. Reusing hardware across many clients drives down costs and also reduces the ongoing administration required.
Multi-tenant deployments host many clients in the same instance of software while segregating the client data through configuration. Multi-instance designs are similar but run one instance of software per client. From the storage tier's perspective, both approaches require many sub-ecosystems to run simultaneously in a shared space, and both cause similar access challenges (a minimal sketch of the two designs follows the list below):
- Unpredictability of usage
- Height of individual usage spikes
- Scale versus storage performance (more clients translates into a more random and parallel workload)
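As a rough illustration of the two designs, here is a minimal Python sketch. The class names, fields, and data are hypothetical, not tied to any particular product: a multi-tenant service scopes every read and write by a tenant identifier inside one shared application, while a multi-instance service keeps one logical application and data set per client. Either way, the storage underneath sees many independent workloads arriving in parallel.

```python
# Hypothetical sketch of multi-tenant vs. multi-instance data access.
# Names and structures are illustrative only.

class MultiTenantStore:
    """One shared application and store; rows are segregated by tenant_id."""
    def __init__(self):
        self.rows = []                        # all tenants share one table

    def insert(self, tenant_id, record):
        self.rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Every query is scoped by tenant_id via configuration/convention.
        return [r for r in self.rows if r["tenant_id"] == tenant_id]


class MultiInstanceStore:
    """One logical application instance (and data set) per client."""
    def __init__(self):
        self.instances = {}                   # tenant_id -> private row list

    def insert(self, tenant_id, record):
        self.instances.setdefault(tenant_id, []).append(record)

    def query(self, tenant_id):
        return self.instances.get(tenant_id, [])


if __name__ == "__main__":
    mt = MultiTenantStore()
    mi = MultiInstanceStore()
    for tenant in ("acme", "globex"):
        mt.insert(tenant, {"order": 1})
        mi.insert(tenant, {"order": 1})
    # Both designs return only the caller's data, but to the storage tier
    # both look like many random, parallel workloads sharing the same media.
    print(mt.query("acme"), mi.query("globex"))
```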
SSDs are like cordless phones and DVDs: they improved on an existing technology but didn’t revolutionize how it is used. In technology there is a difference between modernizing and revolutionizing. Modernizing is finding a way to do the same thing a little bit faster or a little bit easier. Revolutionizing is eliminating or vastly changing how something is done.
Flash is about latency and IOPS, so why would it be good for Data Warehousing or Business Intelligence?
Excellent question. Yes, the typical marketing and wow-factor stats around flash are based on latencies and IOPS (input/output operations per second). Data Warehousing (DW) and Business Intelligence (BI) are normally a throughput game, so what gives?
Transactional workloads are commonly defined as small, atomic pieces of work. This is in contrast to decision support, Data Warehouse, Business Intelligence, and other reporting systems, which issue fewer, larger, more sequential operations. Updates, inserts, deletes, and even small result-set selects all fall under OLTP, or transactional, work.
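One way to see why the two metrics converge is the simple relationship throughput = IOPS × I/O size. The numbers in the sketch below are purely illustrative, not benchmark results, but they show how a DW/BI workload with modest IOPS and large, sequential I/O can push far more bandwidth than a small-block OLTP workload running at much higher IOPS.

```python
# Illustrative arithmetic only: throughput (MB/s) = IOPS * I/O size.
def throughput_mb_per_s(iops, io_size_kb):
    return iops * io_size_kb / 1024.0

# Hypothetical workloads -- the figures are examples, not measurements.
oltp = throughput_mb_per_s(iops=200_000, io_size_kb=4)    # small random I/O
dw   = throughput_mb_per_s(iops=25_000, io_size_kb=256)   # large sequential I/O

print(f"OLTP: {oltp:,.0f} MB/s")   # ~781 MB/s
print(f"DW:   {dw:,.0f} MB/s")     # ~6,250 MB/s
```

In other words, the same low-latency media that produces eye-catching IOPS numbers can also be driven with large I/O sizes, which is exactly what scan-heavy DW and BI workloads need.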