Why are all the modern Storage QoS implementations not good enough?

Storage QoS is becoming a key functionality today, driven mainly by the push towards increased resource utilization and, therefore, resource sharing. In the past, storage arrays dedicated a RAID group to each LUN, and applications had LUNs dedicated to them. This worked really well in terms of guaranteed IOPS and isolation. As disk sizes increased and storage technology matured, we started sharing drives amongst LUNs, and these LUNs were no longer isolated from each other. Virtualization made the situation even worse, because not only were the disks shared amongst LUNs, the LUNs themselves were shared by multiple workloads (VMs). The result was the noisy-neighbor problems that plague these LUN- and volume-based storage systems.

A few storage vendors have some form of manual QoS functionality built into the storage OS. Tintri, for example, built from the beginning an architecture that enables always-on, fully automatic, dynamic QoS at the individual vDisk level (think VMDKs, VHDs, etc.). It automatically reserves storage resources per vDisk (based on our built-in IO analytics engine) so that every vDisk gets the performance it needs at sub-ms response times. The architecture is designed such that a new vDisk gets its performance from the free reserves available in the system, so at no point does a vDisk that needs more performance impact an existing vDisk. This approach is different from the traditional one, where QoS is set up manually at the LUN/volume level, but it is highly effective for IT organizations that don't want to hand-hold the storage system. Tintri is the only storage product out there with always-on, dynamic QoS enabled in all of its storage appliances.
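To make the idea concrete, here is a minimal sketch of reserve-based admission against an abstract pool of performance reserves; it illustrates the behavior described above and is not Tintri's actual implementation (all names and numbers are made up).

```python
# Toy sketch of reserve-based QoS: new vDisks are admitted against the free
# performance reserves, so an existing vDisk's reservation is never taken away.
# All identifiers and numbers here are made up for illustration.

class ReservePool:
    def __init__(self, total_reserves: float):
        self.total = total_reserves              # abstract performance-reserve units
        self.allocated: dict[str, float] = {}    # vDisk -> reserved units

    @property
    def free(self) -> float:
        return self.total - sum(self.allocated.values())

    def admit(self, vdisk: str, needed: float) -> bool:
        """Reserve performance for a new vDisk only if free reserves cover it."""
        if needed <= self.free:
            self.allocated[vdisk] = needed
            return True
        return False                             # never carved out of existing vDisks

pool = ReservePool(total_reserves=100.0)
print(pool.admit("vm01.vmdk", 30.0))   # True  -> 70 units still free
print(pool.admit("vm02.vmdk", 80.0))   # False -> would have to steal from vm01.vmdk
```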


Having said that, manually configured QoS does have its place in the service provider (SP) space, as well as in some private cloud implementations, where the SP (public or private) doesn't want to give everyone unlimited performance. These SPs want to be able to sell, say, a Platinum service tier to their customers and do it dynamically, on the fly, without even moving the workload. So, coming back to my original point about the QoS implemented by storage vendors today, here are the reasons why I say it is not good enough:

The Granularity Challenge

In today’s datacenters, workloads are virtual, and clouds are not implemented without virtualization. In these virtualized datacenters, dealing with LUNs/volumes is a pain. LUNs were introduced 30-40 years ago, when workloads were physical, and we kept using LUNs/volumes even with virtualized workloads because that’s what the storage systems knew. In a virtual environment, a LUN has multiple workloads running in it, so implementing QoS on LUNs has no advantage for virtual workloads, whether it is being done for isolation or for chargeback. VVols will change this (only for vSphere), but there is still a long way to go: VVols don’t support all the vSphere features, and not all vendors have a practically deployable implementation.

The result is that the VMs in a LUN/volume end up sharing the IOPS limit set at the LUN/volume level and therefore end up interfering with each other.
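A quick back-of-the-envelope example of that interference, using hypothetical numbers (a 5,000 IOPS limit on one LUN hosting ten VMs):

```python
# Hypothetical numbers: one LUN-level IOPS limit shared by ten VMs, with a
# single noisy neighbor consuming most of the budget.

LUN_IOPS_LIMIT = 5000
VMS_ON_LUN = 10

noisy_neighbor_iops = 4200                        # one VM misbehaving
left_for_the_rest = LUN_IOPS_LIMIT - noisy_neighbor_iops
per_vm = left_for_the_rest / (VMS_ON_LUN - 1)

print(f"Each remaining VM gets ~{per_vm:.0f} IOPS instead of the "
      f"~{LUN_IOPS_LIMIT / VMS_ON_LUN:.0f} IOPS a fair per-VM share would give.")
# Each remaining VM gets ~89 IOPS instead of the ~500 IOPS a fair per-VM share would give.
```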

The IOPS Dilemma

Storage QoS is usually implemented in terms of IOPS. One can combine an IOPS limit with an MB/s limit, but only a few vendors allow that; usually, it is one or the other.

Now here is the problem: IOPS can mean very different things depending on the block size. If I limit a LUN/volume to 1,000 IOPS, here is what it could mean:

  • 4K block size means 4 MB/s
  • 8K block size means 8 MB/s
  • 64K block size means 64 MB/s

The same 1,000 IOPS can mean 16x more load on the system at a 64K block size versus 4K. That is a big difference for a service provider to take into account when pricing a service, and in some cases even a large number of small-block IOs may load the storage more than large-block IOs. Some vendors let you combine the IOPS limit with a throughput limit to get around this to some extent, but ideally service providers want one unit to bill against and a single scale on which to measure everyone. Microsoft’s implementation of normalized IOPS is a great example of such a metric.
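The sketch below shows both sides of the coin: the throughput implied by a fixed IOPS figure at different block sizes, and a normalized-IOPS count that charges large IOs as multiples of a small base unit. The 8 KB normalization unit mirrors what Microsoft has described for its Storage QoS, but treat it here as an assumption for the example rather than a spec reference.

```python
# Why raw IOPS are ambiguous, and how normalizing to a fixed unit helps.
# The 8 KB unit is an assumption for illustration.

NORMALIZATION_UNIT_KB = 8

def throughput_mb_per_s(iops: int, block_size_kb: int) -> float:
    """Throughput implied by an IOPS figure at a given block size (decimal MB)."""
    return iops * block_size_kb / 1000

def normalized_iops(iops: int, block_size_kb: int,
                    unit_kb: int = NORMALIZATION_UNIT_KB) -> int:
    """Count each IO as ceil(block_size / unit) normalized IOs."""
    ios_per_request = -(-block_size_kb // unit_kb)    # ceiling division
    return iops * ios_per_request

for bs in (4, 8, 64):
    print(f"1000 IOPS @ {bs:>2}K = {throughput_mb_per_s(1000, bs):>4.0f} MB/s, "
          f"{normalized_iops(1000, bs)} normalized IOPS")
# 1000 IOPS @  4K =    4 MB/s, 1000 normalized IOPS
# 1000 IOPS @  8K =    8 MB/s, 1000 normalized IOPS
# 1000 IOPS @ 64K =   64 MB/s, 8000 normalized IOPS
```

With a metric like this, the service provider bills against one scale, and a 64K-heavy workload is charged roughly in proportion to the extra load it actually puts on the system.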

The Throttling Effect

Some storage systems that apply QoS on LUNs have an additional problem to deal with, specifically on hosts that map more than one LUN from the same storage system through an HBA. When a QoS limit is set on a specific LUN and that LUN tries to go above the limit, the storage system throttles it. The IOs queue up at the HBA level, and at that point the host starts to throttle its IO to the storage, not just for the LUN in question but for all the LUNs coming from that storage system, thinking the array cannot handle the load it is being sent. This makes it practically impossible to implement QoS at an individual LUN level without impacting other LUNs.
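A toy model of that side effect, assuming a shared per-adapter queue depth of 32 (a made-up number, not tied to any specific HBA or array): as the throttled LUN's outstanding IOs back up, they occupy queue slots that the well-behaved LUNs can no longer use.

```python
# Toy model of the HBA-level queueing side effect. The queue depth is an
# assumption for illustration; real adapters and drivers vary.

HBA_QUEUE_DEPTH = 32          # shared per-adapter queue slots (assumed)

def slots_left_for_other_luns(outstanding_on_throttled_lun: int) -> int:
    """Slots available to every other LUN once the throttled LUN's IOs back up."""
    return max(HBA_QUEUE_DEPTH - outstanding_on_throttled_lun, 0)

for backlog in (4, 16, 30):
    print(f"throttled LUN holding {backlog:2d} slots -> "
          f"{slots_left_for_other_luns(backlog):2d} slots left for all other LUNs")
```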

[Figure: QoS throttling vs. fair share]

The Visibility and Analytics Challenge

For most storage vendors, QoS is more of a checkbox feature, with very few real-world deployments. The reason is that QoS is genuinely complex to implement, and there are more ways to get it wrong than right. QoS has to be implemented as a strategy, across all workloads. The challenge is that once someone gets it wrong, it is not easy to fix, and it usually requires involving vendor support teams to determine the cause. Some vendors sell professional services around this, which makes it a really expensive feature to implement.

The other point is that QoS itself can become the cause of latency, either because of the maximum limits set on a workload or because of the contention that results when the cumulative minimum IOPS guarantees configured across workloads exceed the overall performance capability of the storage system. Ideally, storage systems should give more insight into QoS and its impact on the various workloads, so that if someone complains about latency or a drop in performance, the IT team can quickly pinpoint the reason. None of the storage vendors provide advanced, user-friendly analytics for QoS today, and that is one of the biggest inhibitors of real-world QoS adoption.
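The over-commitment half of that problem is easy to state as a sanity check; the sketch below uses made-up capacity and workload numbers purely for illustration:

```python
# Sanity check before adding another minimum-IOPS guarantee: the sum of all
# minimums must stay within what the array can sustain. Numbers are made up.

SYSTEM_CAPACITY_IOPS = 100_000        # assumed sustainable IOPS of the array

min_guarantees = {                    # hypothetical per-workload minimums
    "sql-prod": 40_000,
    "vdi-pool": 35_000,
    "exchange": 20_000,
}

def can_admit(new_min_iops: int) -> bool:
    """Reject a new guarantee if the total of minimums would exceed capacity."""
    return sum(min_guarantees.values()) + new_min_iops <= SYSTEM_CAPACITY_IOPS

print(can_admit(5_000))    # True  -> total hits 100,000, exactly at capacity
print(can_admit(10_000))   # False -> guarantees exceed what the array can deliver
```

If a storage system neither performs this kind of admission control nor surfaces the data needed to do it, the minimum guarantees are promises it may not be able to keep.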

To summarize, the QoS offered by storage vendors today is not granular enough; it doesn’t have a single scale on which to measure or apply guarantees and limits; it doesn’t ensure performance fair-share; it doesn’t guarantee isolation; and storage vendors don’t provide the analytics needed to make QoS easy to implement and to troubleshoot QoS-related issues. I think it’s time to address these challenges so that QoS can be widely accepted and implemented in the datacenter.

Cheers..

@storarch

FY16, VVols and Tintri’s Financial Differentiation

vSphere 6.0 is GA, and at PEX this week Tintri announced support for vSphere 6.0, VVols and VMware Integrated OpenStack, along with a plugin for vRealize Operations (vROps). We also finished our FY in January and will be off to our Sales Kick Off next week. FY15 was a great year, with tremendous growth and record quarters for Tintri. Tintri continues to lead the way with a product designed from the ground up for both flash and virtualization. With vSphere 6.0 we will bring all the goodness that customers love about VMstore to VVols, including some of the key differentiators that separate us from the pack:

  • 99-100% of IO served from flash
  • VM-granular operations
  • VM-granular visibility and latency visualization
  • Per-VM/VVol analytics
  • Automatic per-VM partitioning of storage resources (performance reserves) based on those analytics
  • QoS and performance fair-share at the VM level (there are a lot more exciting things coming in this space; stay tuned for an update)
  • Latency visualization across the infrastructure (host, network, storage), with more to come in the coming weeks; stay tuned
  • 1M VVols per VMstore
    • With a VVols implementation, a VM may need as few as 5 VVols or as many as hundreds of VVols (with snapshots, clones, etc.), so a 1,000-VM install would require tens of thousands to hundreds of thousands of VVols.
  • VM-granular automation
  • Ability to manage, monitor and analyze up to 112,000 VMs from a single pane of glass

The VVol race that storage vendors are starting now was won by Tintri four years ago. If you will be evaluating VVols in the coming weeks or months, you should definitely read this blog here to understand what to ask of your storage vendor when it comes to VVols.

From the beginning, Tintri focused on using software to drive innovation, and one of the key differentiators of the technology is its ability to deliver 99-100% of IO from flash. That is driven by our software, unlike all-flash vendors that use brute force to deliver performance. The advantage is that Tintri can address a broad spectrum of workloads at a very low cost.

[Figure: Workload breakup]

What this means is that unlike an all-flash solution, where $/IOPS is low and $/GB is high, or a hybrid solution, where $/IOPS is high and $/GB is low, Tintri keeps both $/GB and $/IOPS low without over-depending on space savings (dedupe and compression), therefore delivering a better $/workload at a very high density.

[Figure: $/GB]

Our focus on virtualization continues to help us differentiate and bring new virtualization- and cloud-centric functionality to market faster. The result is a platform that is 5-10x cheaper on CapEx, 60x cheaper on OpEx, and highly scalable.

Cheers..

@storarch

Why Tintri’s milestone of 200,000 VMs and 16PB of storage is bigger than you think

Yesterday we announced a significant milestone of 200,000 VMs deployed on Tintri with close to 16PB addressed by these VMs.

Now, these numbers may not look big on their own, but they become really interesting when you combine them with the fact that we have doubled the number of VMs in the last six months, and that the capacity numbers are for VMs experiencing sub-ms performance. All of this has come at the expense of incumbent storage providers, with Tintri beating both modern and traditional storage vendors in the process.

In 2011, Tintri brought to market a revolutionary storage platform designed for flash, built just to run VMs and vDisks. The idea was a storage platform designed with virtualization in mind, not physical workloads. To date, Tintri VMstore is the only external, best-of-breed platform with a laser focus on virtualization. The general-purpose platforms (traditional and modern, flash and hybrid) continue to focus on physical workloads even as customers move towards 100% virtualization.

Where Tintri continues to have an edge across multiple hypervisors is vDisk-level visibility into workloads, compared to general-purpose platforms that have no idea what happens at the hypervisor layer.

One of the drawbacks of that blindness is the IO blender effect that everyone talks about, with no solution offered except throwing flash at it. Tintri is the only platform that addresses it, delivering high performance, sub-ms response times, vDisk-granular isolation, predictability, analytics across the infrastructure, and per-VM data services.

VMware VVols is a step in the right direction, but it is just an enabler for one hypervisor, and it depends on each vendor which functionality actually reaches the storage platform. With all our experience in managing and understanding VMs, Tintri is committed to bringing all the goodness that customers have experienced on the VMstore, and much more, to VVols. And only Tintri will be able to offer customers the choice of deploying VMs either with VVols or with an open implementation like NFS without compromising on functionality.

How will we maintain this growth, as the base gets bigger?

Tintri has a solid roadmap. Our focus on virtualization allows us to bring better functionality around virtualization and cloud to market much faster. We don’t have to try to be good for all workloads, as the other general-purpose platforms do, and compromise everywhere in the process. We are the best storage platform for virtualization and cloud, and we will continue to be so for a long time.

If you want to experience all-flash performance for all your virtualized applications (not just the critical ones) at a quarter of the cost of an over-provisioned flash array (priced on over-sold dedupe and compression), with all the analytics, auto-tuning, VM-granular goodness and isolation that our customers are experiencing, I would encourage you to contact your Tintri rep or your preferred partner. If you plan to attend VMworld, we are going to have a big presence there. You can book a private meeting with Tintri or attend one of our sessions. See details here.

Cheers..

@storarch

Is VVol the solution to VM awareness?

VVols is ‘THE’ solution to VM awareness for many.

Yes, we have been waiting for it for a long time now and we are still unsure about its whereabouts.

For those of you who want to understand why there is a need for VM awareness, there are a lot of blogs on this topic by some of the best in the industry. Stephen Foskett covered it in three parts – Part 1, Part 2, Part 3. Tintri has a great infographic explaining it on a page here and in a blog post here.

VVols bring in a new model that lets one define policies and data services at the VM level, more granular than the volume/LUN-level model used by traditional storage devices, while at the same time preventing the IO blender situation to an extent.

In my opinion, adding VVols to vSphere is a great step by VMware, but it is definitely only a small part of the solution. In fact, I think it is just an enabler, and a lot is still needed at the underlying storage level to make it truly ideal VM-aware storage. Let’s dig more into this.
