FY16, VVols and Tintri's Financial Differentiation

vSphere 6.0 is GA, and Tintri announced support for vSphere 6.0, VVols and VMware Integrated OpenStack, along with a plugin for vRealize Operations (vROps), at PEX this week. We also finished our fiscal year in January and head off to our Sales Kick Off next week. FY15 was a great year with tremendous growth and record quarters for Tintri. Tintri continues to lead the way with a product designed from the ground up for both flash and virtualization. With vSphere 6.0 we will bring all the goodness that customers love about VMstore to VVols, including some of the key differentiators that separate us from the pack:

  • 99-100% IO from flash
  • VM granular operations
  • VM granular visibility and latency visualization
  • Per VM/VVol analytics
  • Automatic per-VM partitioning of storage resources (Performance Reserves) based on those analytics
  • QoS and performance fair-sharing at a VM level (there are a lot more exciting things coming in this space. Stay tuned for an update on this…)
  • Latency visualization across the infrastructure (host, network, storage). We are going to add more to this in the coming weeks… stay tuned
  • 1M VVols per VMstore
    • With a VVols implementation, a VM may need as few as 5 VVols or as many as hundreds of VVols (with snapshots, clones etc.), so a 1,000-VM install would require tens of thousands to hundreds of thousands of VVols.
  • VM Granular Automation
  • Ability to manage, monitor and analyze up to 112,000 VMs from a single pane of glass

The VVol race that the storage vendors are starting now was won by Tintri four years ago. If you will be evaluating VVols in the coming weeks/months, you should definitely read this blog here to understand what to ask your storage vendor when it comes to VVols.

Tintri has focused from the beginning on using software to drive innovation, and one of the key differentiators of the technology is its ability to deliver 99-100% IO from flash, driven by our software, unlike all-flash vendors that use brute force to deliver performance. The advantage is that Tintri can address a broad spectrum of workloads at a very low cost.

[Figure: Workload breakup]

What this means is that unlike an all-flash solution, where $/IOP is low and $/GB is high, or a hybrid solution, where $/IOP is high and $/GB is low, Tintri brings both $/GB and $/IOP down to low levels without over-depending on space savings (dedup and compression), therefore delivering a better $/workload at a very high density.
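To make the cost argument concrete, here is a minimal sketch with invented prices (all numbers are illustrative assumptions, not vendor pricing). The point it encodes: an array must be sized for whichever dimension a workload exhausts first, so the binding dimension sets the effective cost per workload.

```python
# Illustrative sizing model with made-up prices -- not vendor quotes.
# An array is purchased to cover whichever dimension (capacity or
# performance) a workload exhausts first, so that dimension drives cost.

PLATFORMS = {
    "all-flash": {"usd_per_gb": 10.0, "usd_per_iops": 0.5},  # low $/IOP, high $/GB
    "hybrid":    {"usd_per_gb": 1.0,  "usd_per_iops": 5.0},  # high $/IOP, low $/GB
    "tintri":    {"usd_per_gb": 2.0,  "usd_per_iops": 1.0},  # low on both axes
}

def workload_cost(gb: float, iops: float, p: dict) -> float:
    """Cost is set by the dimension that forces the bigger purchase."""
    return max(gb * p["usd_per_gb"], iops * p["usd_per_iops"])

vm = {"gb": 500, "iops": 1500}   # a typical mixed virtualized workload
for name, p in PLATFORMS.items():
    print(f"{name:9s}: ${workload_cost(vm['gb'], vm['iops'], p):,.0f}")
# all-flash: $5,000   hybrid: $7,500   tintri: $1,500
```

With these toy numbers the mixed workload is capacity-bound on the all-flash array and performance-bound on the hybrid, which is exactly the squeeze described above.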

[Figure: $/GB]

Our focus on virtualization continues to help us differentiate and bring new virtualization- and cloud-centric functionality to market faster. The result is a platform that is 5-10x cheaper on CAPEX, 60x cheaper on OPEX and highly scalable.

Cheers..

@storarch

Simplifying Storage Chargeback/Showback with Tintri – Part 3 (Performance)

In Part 1 of this series, I covered the challenges around Storage Chargeback and Showback in virtualized environments running on traditional storage platforms that use LUN/Volume abstraction layers, and in Part 2, I covered how Tintri simplifies and brings more accuracy to the Capacity Centric Model with its VM-centric design.

In this post, I will cover how Tintri makes it easy to incorporate a more accurate performance-based model into Chargeback/Showback, one that can be used in combination with the Capacity Centric Model.

As I stated in my first post, although everyone would like to add a performance-centric model to Chargeback/Showback, it is not that popular, given the complexities around implementing something like that on the storage side in a virtualized environment (refer to my first post in the series).

As we all know, one of the big advantages (and differentiators) that Tintri has is that its abstraction layer is a vDisk (VMDK, VHD etc.) and not a LUN or a Volume. This allows it to see the environment at the right level of abstraction (rather than looking at a LUN or a Volume). The other advantage is that, because of its tight API integration with various hypervisors, it sees a vDisk as a vDisk and not as just another file. This ability allows it to create an IO profile of every vDisk in the system. This IO profile gives Tintri an understanding of the type of IO taking place inside a vDisk (random, sequential, %reads, %writes, block size), based on which it assigns 'Performance Reserves' to every vDisk. The Performance Reserves are a combination of various resources in the storage platform – like flash, CPU shares, network buffers etc. – and are shown by the Tintri management interface as % values.

So if we consider a database (DB) VM with a C: drive, D: drive (DB files) and E: drive (logs), Tintri would look at those individual drives, learn their IO profiles (say D: is random IO, E: is sequential and C: has not much IO) and then automatically assign Performance Reserves to them, with the goal of delivering 99-100% IO from flash and sub-ms response times. The first benefit (as I have discussed in my other posts) of this approach is that Tintri automatically tunes itself, unlike some other storage products that just show some values (most of them at a LUN/Volume level) and require the admin to take action – like add/buy more flash, reshuffle the VMs, reduce CPU/cache load etc. The second benefit, the one that we will discuss here in more detail, is that IT/Service Providers now have a metric against which they can do a Chargeback/Showback.
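To make the IO-profile idea concrete, here is a minimal sketch of what per-vDisk profiling and reserve assignment could look like. The field names and the scoring heuristic are illustrative assumptions for this post, not Tintri's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class VDiskIOProfile:
    """Per-vDisk IO profile along the dimensions named above."""
    name: str
    read_pct: float      # fraction of IO that is reads
    random_pct: float    # fraction of IO that is random
    avg_block_kb: float  # average IO size in KB
    iops: float

def reserve_weight(p: VDiskIOProfile) -> float:
    """Toy heuristic: more bandwidth, more randomness and more
    writes all consume more storage resources."""
    bandwidth = p.iops * p.avg_block_kb          # KB/s actually moved
    random_penalty = 1.0 + p.random_pct
    write_penalty = 1.0 + (1.0 - p.read_pct)
    return bandwidth * random_penalty * write_penalty

# The DB VM from the example: C: mostly idle, D: random DB IO, E: sequential logs.
vdisks = [
    VDiskIOProfile("C:", read_pct=0.7, random_pct=0.5, avg_block_kb=4,  iops=50),
    VDiskIOProfile("D:", read_pct=0.7, random_pct=0.9, avg_block_kb=8,  iops=2000),
    VDiskIOProfile("E:", read_pct=0.1, random_pct=0.1, avg_block_kb=64, iops=100),
]

total = sum(reserve_weight(v) for v in vdisks)
for v in vdisks:
    print(f"{v.name} -> {100 * reserve_weight(v) / total:.1f}% of this VM's reserves")
```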

Performance Reserves are independent of IOPS and are a measure of how much of the storage platform's resources are consumed (or will be consumed) by the vDisk in order to get 99-100% IO from flash and sub-ms response times. Traditional performance-centric models that use IOPS as the measure don't take into consideration the amount of storage resources consumed by the vDisk for a particular type/size of IO.

The problem here is the traditional underlying storage architecture that continues to use LUNs/Volumes as the abstraction layer. These storage products in some cases don't have the ability to provide quality of service even at those abstraction layers, let alone look at individual vDisks and automatically tune various storage resources for a workload.

Here we look at a simple example of how three different VMs with different IO characteristics get their Performance Reserves assigned by Tintri.

The VM ss_testld_1 in the first screenshot is doing 2,751 IOPS (avg. 8K block size) with 90% reads and has 5.2% Performance Reserves allocated to it.

[Screenshot 1: VM ss_testld_1]

The VM ss_testld_2 in the second screenshot is doing 1,112 IOPS (avg. 81.6K block size) with 90% reads and has 10.8% Performance Reserves allocated to it.

[Screenshot 2: VM ss_testld_2]

The VM ss_testld_3 in the third screenshot is doing 1,458 IOPS (avg. 8K block size) with 90% writes and has 4.6% Performance Reserves allocated to it.

[Screenshot 3: VM ss_testld_3]

So what do we see here?

The VM ss_testld_1 is doing 90% reads, just like the VM ss_testld_2, and is doing almost 2.5x the IOPS (2,751) of the VM ss_testld_2 (1,112), but it still has a lower Performance Reserves footprint (5.2% vs. 10.8%) because it has a much smaller block size (avg. 8K vs. avg. 81.6K).

So, if we look at just IOPS, the cost of running ss_testld_1 seems higher, but in reality the cost of running ss_testld_2 is more than double that of ss_testld_1.

In the same way, the VM ss_testld_1 is doing almost double the IOPS (2,751) of the VM ss_testld_3 (1,458) with the same block size (avg. 8K), but they are using almost similar Performance Reserves (5.2% and 4.6%) because the latter is write-heavy (90% writes).

Here again, if we look at just the IOPS, the cost of running ss_testld_1 seems higher, but in reality the cost of running ss_testld_3 is almost the same as that of ss_testld_1, even with ss_testld_3 doing half the IOPS.
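One way to see why IOPS alone mislead is to look at the bandwidth each VM actually drives; reserves track resource consumption, and moving 81.6K blocks simply costs more than moving 8K blocks. A quick back-of-the-envelope check using the numbers from the screenshots:

```python
# Numbers taken from the three screenshots above.
vms = {
    "ss_testld_1": {"iops": 2751, "block_kb": 8.0,  "reserves_pct": 5.2},
    "ss_testld_2": {"iops": 1112, "block_kb": 81.6, "reserves_pct": 10.8},
    "ss_testld_3": {"iops": 1458, "block_kb": 8.0,  "reserves_pct": 4.6},
}

for name, v in vms.items():
    mb_per_s = v["iops"] * v["block_kb"] / 1024   # data actually moved per second
    print(f"{name}: {v['iops']:>5} IOPS -> {mb_per_s:6.1f} MB/s "
          f"({v['reserves_pct']}% reserves)")
# ss_testld_1: ~21.5 MB/s, ss_testld_2: ~88.6 MB/s, ss_testld_3: ~11.4 MB/s
```

ss_testld_2 pushes roughly four times the data of ss_testld_1 despite doing 2.5x fewer IOPS, which lines up with its roughly double reserves; and ss_testld_3 lands near ss_testld_1 because its 90% writes cost more per IO than reads (bandwidth alone is not the whole formula, but it shows the direction).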

Taking a very simplistic example for Chargeback/Showback, if we consider the cost of 100% Reserves to be, say, $100,000, based on the cost to acquire/install/support Tintri (see the short sketch after this list):

  • ss_testld_1 would cost around $5.2K to run
  • ss_testld_2 would cost around $10.8K to run
  • ss_testld_3 would cost around $4.6K to run
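In code, this chargeback math is just a linear split of the platform cost by reserve share; a minimal sketch using the numbers from this example:

```python
TOTAL_PLATFORM_COST = 100_000   # $ for 100% reserves (acquire/install/support)

reserves_pct = {"ss_testld_1": 5.2, "ss_testld_2": 10.8, "ss_testld_3": 4.6}

for vm, pct in reserves_pct.items():
    print(f"{vm}: ${TOTAL_PLATFORM_COST * pct / 100:,.0f}")
# ss_testld_1: $5,200   ss_testld_2: $10,800   ss_testld_3: $4,600
```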

If we had taken an IOPS-centric model, the costs would have come out completely different, with no relation to the amount of storage resources consumed to run a particular type of workload.

The other big challenge with the IOPS-centric model is its unpredictability. Let's say we sized a storage platform for X IOPS (say 8K) with some read:write and random:sequential mix, and then priced per IOP accordingly. What if the platform doesn't deliver those IOPS (say it delivers Y IOPS) because our assumptions were wrong, or because of completely unpredictable workloads? The difference (X−Y) now has to be absorbed somewhere. To cover it, the IT/Service Provider would charge more next time and become uncompetitive.

The Performance Reserve metric is independent of IOPS and can be combined with a Capacity Centric Model (for capacity-hungry VMs) to give a more accurate Chargeback/Showback model.

The cool thing about Tintri's metrics is that they are all exposed through our REST APIs and PowerShell integration. Therefore, they can plug into any customized model as well, giving a more predictable, simplified and accurate Chargeback/Showback model.
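As a rough illustration of what that plumbing could look like, here is a hedged sketch that pulls per-VM reserve numbers over REST and turns them into a chargeback line item. The endpoint paths, field names and login scheme are hypothetical placeholders, not Tintri's documented API; consult the actual REST API reference for the real calls.

```python
import requests

VMSTORE = "https://vmstore.example.com"   # hypothetical appliance address
TOTAL_PLATFORM_COST = 100_000             # $ for 100% reserves

session = requests.Session()
# Hypothetical login call -- the real API's auth endpoint will differ.
session.post(f"{VMSTORE}/api/login",
             json={"username": "admin", "password": "***"},
             verify=False)

# Hypothetical per-VM metrics endpoint and field name.
vms = session.get(f"{VMSTORE}/api/vms").json()
for vm in vms:
    pct = vm["performance_reserves_pct"]
    print(f"{vm['name']}: {pct:.1f}% reserves -> "
          f"${TOTAL_PLATFORM_COST * pct / 100:,.0f} chargeback")
```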

Thanks for reading…

Cheers.

@storarch

Why Tintri's milestone of 200,000 VMs and 16PB of storage is bigger than you think

Yesterday we announced a significant milestone: 200,000 VMs deployed on Tintri, with close to 16PB addressed by these VMs.

Now, these numbers may not look big on their own, but they become really interesting when you combine them with the fact that we have doubled the number of VMs in the last six months, and that the capacity numbers are for VMs experiencing sub-ms performance. All of this has come at the expense of incumbent storage providers, with Tintri beating modern and traditional storage vendors in the process.

Tintri brought a revolutionary storage platform to market in 2011, designed for flash and built just to run VMs and vDisks. The idea was to have a storage platform designed with virtualization in mind, not physical workloads. To date, Tintri VMstore is the only external, best-of-breed platform with a laser focus on virtualization. The general-purpose platforms (traditional and modern, flash and hybrid) continue to focus on physical workloads even as customers move towards 100% virtualization.

Where Tintri continues to have an edge across multiple hypervisors is the vDisk-level visibility into workloads, as compared to general-purpose platforms that have no idea what happens at the hypervisor layer.

One of the drawbacks of this is the IO blender effect that everyone talks about, with no effort to provide any solution except throwing flash at it. Tintri is the only platform that addresses it, delivering high performance, sub-ms response times, vDisk-granular isolation, predictability, analytics across the infrastructure and per-VM data services.

VMware VVols is a step in the right direction, but it is just an enabler for one hypervisor, and it will depend on each vendor what functionality it brings to its storage platform. With all our experience in managing and understanding VMs, Tintri is committed to bringing all the goodness that customers have experienced on the VMstore, and much more, to VVols. And only Tintri will be able to offer customers a choice of deploying VMs either with VVols or with an open implementation like NFS without compromising on functionality.

How will we maintain this growth, as the base gets bigger?

Tintri has a solid roadmap. Our focus on virtualization allows us to bring better functionality around virtualization and cloud to market much faster. We don't have to try to be good for all workloads, as other general-purpose platforms do, compromising everywhere in the process. We are the best storage platform for Virtualization & Cloud and will continue to be so for a long time.

If you want to experience all-flash performance for all your virtualized applications (not just the critical ones) at a quarter of the cost of an over-provisioned flash array (sold on over-promised dedup and compression), with all the analytics, auto-tuning, VM-granular goodness and isolation that our customers are experiencing, I would encourage you to contact your Tintri rep or your preferred partner. If you plan to attend VMworld, we are going to have a big presence there. You can book a private meeting with Tintri or attend one of our sessions. See details here.

Cheers..

@storarch

Is VVol the solution to VM awareness?

VVols is ‘THE’ solution to VM awareness for many.

Yes, we have been waiting for it for a long time now and we are still unsure about its whereabouts.

For those of you who want to understand why there is a need for VM awareness, there are a lot of blogs on this topic by some of the best in the industry. Stephen Foskett covered it in three parts – Part 1, Part 2, Part 3. Tintri has a great infographic explaining it on a page here and in a blog post here.

VVols brings in a new type of model that basically helps one define policies and data services at a VM level, getting more granular than the current model used by traditional storage devices, which operates at a Volume/LUN level, while at the same time preventing the IO blender situation to an extent.

In my opinion, adding VVols to vSphere is a great step by VMware, but it is definitely only a small part of the solution. In fact, I think it is just an enabler, and a lot is needed at the underlying storage level to make it ideal VM-aware storage. Let's dig more into this.

Continue reading

The Dilemma of Evaluating Storage for VMware

As a Consulting Systems Engineer focused on virtualization, I get to meet a lot of customers and prospects that are evaluating storage solutions for virtualization, cloud, business-critical applications etc. A lot of the time, with so many options available in the storage industry carrying confusingly similar messages, this is how a technology evaluator looks at things.

Is this how you see it too? Believe it or not, this is the situation that some vendors work towards!

As we know, the devil is in the details. When a technical evaluator finds himself/herself in this situation, there are specific questions that he/she can ask each vendor to get a clearer picture. I have tried to put together a list below, in no specific order.

Continue reading

Back to the Future with Software Defined Storage

Every few years our industry comes up with a new buzzword that takes over vendor collateral, webpages, blogs, tweets etc. We had Virtualization, then we had Cloud, and now it is the Software Defined Datacenter. From NetApp's perspective, none of this is new. NetApp started virtualization at the storage/data-block level way back in 1992. A lot of major innovations came in the 2000s, when technologies like V-Series, FlexVols, RAID-DP, Thin Provisioning, writeable clones, deduplication, file/sparse-file-level clones and FlashCache were brought to market by NetApp (to see a summary of innovations by NetApp, see this page here). When VMware started to grow rapidly, NetApp grew with it, as it was ahead of the curve and ready for growth in virtual environments.

Continue reading

VMware VAAI and Sub-LUN Clones – A hidden gem

NetApp Data ONTAP (DOT) 8.1 introduced a lot of new functionality and optimizations. For me, one of the best is the introduction of sub-LUN cloning when used in conjunction with vStorage APIs for Array Integration (VAAI) for SAN in environments running vSphere 4.1 or higher. The same functionality is available even for Hyper-V through our PowerShell and System Center Orchestrator integration.

Originally, in the case of LUNs, NetApp Virtual Storage Console used a standard copy within the LUN followed by LUN-level cloning to achieve fast speeds and space efficiency. Starting with 8.0.1, it began to accelerate cloning within the LUN using the copy-offload primitive in VAAI. With 8.1 it took a step further and started utilizing sub-LUN cloning in environments where both VAAI and FlexClone are enabled, to make sure that cloning within the LUN is space efficient as well.

So what is a Sub-LUN Clone?

Continue reading