
Multi-tenant and Multi-instance Systems on All-flash Storage Arrays

by VIOLIN SYSTEMS on May 9, 2013

Modern hosting and service providers face a constant challenge: keeping their platforms cost efficient.  A common solution is to deploy a multi-tenant or multi-instance architecture in which many customers share the same hardware.  Reusing hardware across many clients drives down costs and reduces the ongoing administration required.

Multi-tenant deployments host many clients in a single instance of the software, segregating client data through configuration.  Multi-instance designs are similar but run one instance of the software per client.  To the storage tier the two approaches look alike: many sub-ecosystems running simultaneously in a shared space, which creates the same access challenges:

  • Unpredictable usage patterns
  • Large individual usage spikes
  • Scale working against storage performance (more clients translate into a more random, more parallel workload; the sketch after this list shows why)
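
A minimal sketch of that last point, assuming a toy workload model (hypothetical tenants and regions, written in Python): each tenant reads its own region strictly sequentially, yet the interleaved stream arriving at the storage tier looks almost entirely random once more than a handful of tenants are active.

    # Toy model: n tenants each read their own region sequentially, but the
    # storage tier only sees the interleaved stream.  Measure how often two
    # consecutive IOs in that stream are actually adjacent on disk.
    import random

    def merged_stream(n_tenants, ios_per_tenant=1000):
        """Interleave n sequential per-tenant streams into one storage-tier stream."""
        cursors = {t: t * ios_per_tenant * 10 for t in range(n_tenants)}  # disjoint regions
        stream = []
        for _ in range(n_tenants * ios_per_tenant):
            t = random.randrange(n_tenants)   # whichever tenant issues the next IO
            stream.append(cursors[t])
            cursors[t] += 1                   # every tenant is purely sequential
        return stream

    def sequential_fraction(stream):
        """Fraction of IOs that land adjacent to the previous IO (no seek needed)."""
        hits = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
        return hits / (len(stream) - 1)

    for n in (1, 2, 8, 64):
        print(f"{n:3d} tenants -> {sequential_fraction(merged_stream(n)):.0%} sequential at the array")

With one tenant the array sees a 100% sequential stream; with 64 tenants it sees roughly 2%, even though every individual client is behaving sequentially.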

Legacy Architecture and Traditional Storage Solutions

To address random workloads, storage administrators have traditionally created large disk groups that stripe data widely over many units (hard drives).  No single unit could satisfy the full requirement on its own, so many units were aggregated into a larger virtual unit (LUN); the striping arithmetic is sketched after the list below.  And because the system hosted many different workloads, many different virtual units had to be created to satisfy each need.  Until now this was the only option, and we lived with the negative repercussions of the Aggregation and Segregation storage model:

  • The storage tier suffers cost inefficiencies.  Each workload virtual unit (LUN) must be capable of delivering peak performance at any time despite rarely being at peak.  So most of the time, most of the storage tier is over-provisioned.  Contrary to the premise of the architecture, the storage tier never achieves the same efficiencies of reuse.
  • Hot spots form as activity migrates from client to client; when many clients need simultaneous access to data on the same unit (disk/SSD), application performance slows.
  • Deployments are inefficient.  Because workloads are segregated, only a portion of the total deployed performance can serve any one workload at any one time, while the remaining portions sit underutilized.
  • Scale increases the randomness of workloads.  Hard drives are designed for a few sequential streams, not many random ones.
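
To make the aggregation half of that model concrete, here is the classic RAID-0-style wide-stripe mapping, sketched in Python (generic striping arithmetic, not any particular vendor’s layout; the stripe unit and drive count are assumptions):

    # Classic wide striping: a LUN's logical block address maps to exactly one
    # member drive and an offset on that drive.
    STRIPE_UNIT = 128          # blocks per stripe unit (assumed)
    N_DRIVES    = 8            # hard drives aggregated into this LUN (assumed)

    def locate(lba):
        """Map a LUN logical block address to (drive index, block offset on drive)."""
        unit_no = lba // STRIPE_UNIT              # which stripe unit overall
        drive   = unit_no % N_DRIVES              # units rotate across the drives
        offset  = (unit_no // N_DRIVES) * STRIPE_UNIT + (lba % STRIPE_UNIT)
        return drive, offset

    print(locate(0), locate(128), locate(1024))   # -> (0, 0) (1, 0) (0, 128)

Note that a single small IO still lands on exactly one drive; only many concurrent IOs spread across different stripe units can use the LUN’s full aggregate speed.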

In time, the alternative became solid state drives (SSDs).  SSDs can replace the disk drives, but they do not completely resolve the issues, because the underlying architectural mechanics remain the same: segregation and aggregation.  Segregation is still required to create LUN groups for each of the many workloads, which prevents any one workload from using all of the deployed SSDs all of the time.  Segregation also requires choosing a number of units and managing that number over time.  Aggregation of SSDs into groups introduces a write-cliff multiplier: for each SSD added to a LUN, the likelihood of a write-cliff-induced performance drop is multiplied.  This limits the number of effective SSDs in a unit and creates a scaling cap.  Hard drive controllers were never designed to work around NAND flash write-cliff issues.
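
The multiplier is easy to quantify under a simple independence assumption.  In the Python sketch below, the per-SSD stall probability is illustrative, not a measured figure:

    # If each SSD in a stripe has an independent chance p of being mid garbage
    # collection (its "write cliff") at any moment, a striped IO that touches
    # every member stalls whenever ANY member is stalled.
    p = 0.02                                 # assumed per-SSD stall probability

    for n in (1, 4, 8, 16, 32):
        stall = 1 - (1 - p) ** n             # P(at least one member is stalled)
        print(f"{n:2d} SSDs in the LUN -> {stall:.1%} chance an IO hits a write cliff")

At 2% per drive, a 16-wide LUN already has a roughly 28% chance that any given striped IO is slowed by a write cliff, which is the scaling cap described above.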

The Future of Mixed Workload Architectures

Violin Systems’ all-flash arrays solve the core storage issues in today’s multi-tenant and multi-instance deployments.  They eliminate the pain of heavily random, parallel and unpredictable workloads by combining an all-silicon technology with a distributed block architecture.

Mixed and Unpredictable

Random Access Storage (RAS).  A memory-like architecture in which every storage address is equally accessible, at the same speed, all the time.  Any workload accessing any data performs at the same speed, every time.  When sequential and random access are processed identically, any number of workloads can be active at once, allowing scale (parallelization) without performance degradation.
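
One way to sanity-check uniform access on any device is a simple micro-benchmark like the Python sketch below.  The device path, block size and IO count are placeholders, and a rigorous test would also bypass the OS page cache (O_DIRECT).  On flash the two numbers converge; on a hard drive random reads are dramatically slower because of seek time.

    import os, random, time

    PATH  = "/dev/sdX"        # placeholder: point at a real device or large file
    BLOCK = 4096              # bytes per read
    COUNT = 2000              # reads per pass

    def avg_read_latency(offsets):
        """Average seconds per read over the given byte offsets."""
        fd = os.open(PATH, os.O_RDONLY)
        try:
            start = time.perf_counter()
            for off in offsets:
                os.pread(fd, BLOCK, off)
            return (time.perf_counter() - start) / len(offsets)
        finally:
            os.close(fd)

    fd = os.open(PATH, os.O_RDONLY)
    blocks = os.lseek(fd, 0, os.SEEK_END) // BLOCK    # size of the target
    os.close(fd)

    sequential = [i * BLOCK for i in range(COUNT)]
    scattered  = [random.randrange(blocks) * BLOCK for _ in range(COUNT)]
    print(f"sequential: {avg_read_latency(sequential) * 1e6:.0f} us/IO")
    print(f"random:     {avg_read_latency(scattered) * 1e6:.0f} us/IO")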

Random and Parallel

Distributed block architecture.  When every IO (input/output operation) touches every physical component every time, the parallelization of the flash is at its maximum, delivering the best possible speed to every operation, every time.  Segregating units into LUNs decreases parallelization instead of maximizing it.  With modern CPUs scaling by adding more cores (rather than higher GHz per core), it is parallelization that allows applications to scale.
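
The contrast with segregated striping can be sketched in a few lines of Python (a toy model, not Violin’s actual data layout; the component count is an assumption):

    # Segregated striping sends a small IO to one drive; a distributed block
    # architecture fans every IO out across all components, so each operation
    # always sees the full parallel bandwidth of the hardware.
    N_COMPONENTS = 16                        # flash modules in the array (assumed)

    def segregated(io_bytes, drive):
        """Classic stripe unit: the whole IO lands on a single member drive."""
        return {drive: io_bytes}

    def distributed(io_bytes):
        """Distributed blocks: the IO is split so every component does a share."""
        share = -(-io_bytes // N_COMPONENTS) # ceiling division
        return {c: share for c in range(N_COMPONENTS)}

    print(segregated(4096, drive=3))         # one component does all the work
    print(len(distributed(4096)), "components each move", distributed(4096)[0], "bytes")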

Write-Cliff Avoidance

Not all flash degrades in performance.  Violin’s NAND flash is intelligently managed at the card, controller and array levels to eliminate the core issue with sustained flash write performance.

Cost Efficient

Every gigabyte of space is 100% usable.  Never again will you buy space to get speed or abandon space on disks that are already allocated.  And since every IO runs at the full speed of the array every time, ongoing administration drops to near zero.
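
The “buy space to get speed” problem is easy to put numbers on.  The per-drive figures in this Python sketch are illustrative assumptions, not quotes:

    # A 15K RPM hard drive delivers on the order of ~180 random IOPS, so hitting
    # an IOPS target forces you to buy far more drives (and far more raw
    # capacity) than the data itself needs.
    IOPS_PER_HDD = 180                       # assumed random IOPS per drive
    GB_PER_HDD   = 600                       # assumed capacity per drive

    def drives_needed(target_iops, data_gb):
        for_speed    = -(-target_iops // IOPS_PER_HDD)   # ceiling division
        for_capacity = -(-data_gb // GB_PER_HDD)
        bought = max(for_speed, for_capacity)
        print(f"{target_iops} IOPS over {data_gb} GB -> buy {bought} drives "
              f"({bought * GB_PER_HDD} GB raw for {data_gb} GB of data)")

    drives_needed(50_000, 2_000)   # speed-bound: 278 drives, ~167 TB raw for 2 TB

Under those assumptions, hitting 50,000 IOPS on hard drives strands more than 98% of the capacity you paid for; on an array where every gigabyte performs identically, the speed and capacity purchases stop being coupled.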

With a distributed block architecture, where all data is spread equally over all components all of the time, and random access storage (RAS), where every storage address is equally accessible at all times, an administrator can always get the maximum performance out of the storage tier for every workload, every time, with very little effort.  It does not matter which client caused a usage spike, where the clients’ data is located, or how many units are in a LUN group.  As the number of clients scales, a Violin Systems all-flash storage array actually gets faster, not slower.  And with Violin, all storage space is usable.

Violin’s unique architecture drives down costs and brings performance stability to the most inefficient tier in today’s multi-tenant and multi-instance architectures.