The industry is validating Tintri – Another one comes through

The last few weeks have been great in terms of industry recognition of how Tintri has been approaching the storage problem for virtualized workloads.

First, VVols went GA, validating the approach Tintri took seven years ago with VMstore: remove the boundaries of LUNs and Volumes in virtualized environments and deliver a VM-centric storage platform, which we shipped around four years ago. The result is four years of product maturity (and a four-year lead) based on real-world deployments.

Now Pure Storage has announced an integration with VMTurbo that lets customers use VMTurbo with Pure Storage to automate the movement of VMs from one LUN to another based on various conditions, including performance and latency.

What does this tell us?


FY16, VVols and Tintri’s Financial Differentiation

vSphere 6.0 is GA, and at PEX this week Tintri announced support for vSphere 6.0, VVols and VMware Integrated OpenStack, along with a plugin for vRealize Operations (vROps). We also finished our fiscal year in January and will be off to our Sales Kickoff next week. FY15 was a great year, with tremendous growth and record quarters for Tintri. Tintri continues to lead the way with a product designed from the ground up for both flash and virtualization. With vSphere 6.0 we will bring all the goodness that customers love about VMstore to VVols, including some of the key differentiators that separate us from the pack –

  • 99-100% IO from flash
  • VM granular operations
  • VM granular visibility and latency visualization
  • Per VM/VVol analytics
  • Automatic per-VM partitioning of storage resources (Performance Reserves) based on the analytics
  • QoS and performance fair share at a VM level (there is a lot more coming in this space; stay tuned for an update)
  • Latency Visualization across the infrastructure (Host, Network, Storage). We are going to add more to this in the coming weeks … stay tuned
  • 1M VVols per VMstore
    • With a VVols implementation, a VM may need as few as 5 VVols or as many as hundreds (with snapshots, clones etc.), so a 1,000-VM install would require tens of thousands to hundreds of thousands of VVols.
  • VM Granular Automation
  • Ability to Manage, Monitor and analyze up to 112,000 VMs from a single pane of glass

The VVol race that storage vendors are starting now was won by Tintri four years ago. If you will be evaluating VVols in the coming weeks or months, you should definitely read this blog to understand what to ask of your storage vendor when it comes to VVols.

Tintri has focused from the beginning on using software to drive innovation, and one of the key differentiators of the technology is its ability to deliver 99-100% of IO from flash, driven by our software, unlike all-flash vendors that use brute force to deliver performance. The advantage is that Tintri can address a broad spectrum of workloads at a very low cost.

Workload Breakup

What this means is that unlike an all-flash solution, where $/IOP is low but $/GB is high, or a hybrid solution, where $/IOP is high but $/GB is low, Tintri keeps both $/GB and $/IOP low without over-depending on space savings (dedup and compression), therefore delivering a better $/workload at a very high density.

$/GB
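
To make the $/workload idea concrete, here is a minimal sketch with entirely hypothetical price points (none of these numbers are real Tintri, all-flash or hybrid pricing). The simplifying assumption is that you buy enough of a platform to cover whichever dimension, capacity or performance, dominates the workload:

```python
# Hypothetical $/GB and $/IOP price points -- illustrative only, not vendor pricing.
platforms = {
    "all-flash": {"per_gb": 8.0, "per_iop": 0.10},
    "hybrid":    {"per_gb": 2.0, "per_iop": 1.00},
    "tintri":    {"per_gb": 2.5, "per_iop": 0.15},
}

# A sample virtualized workload: 500 GB of data driving 2,000 IOPS.
workload = {"gb": 500, "iops": 2000}

for name, price in platforms.items():
    capacity_cost = workload["gb"] * price["per_gb"]
    performance_cost = workload["iops"] * price["per_iop"]
    # You effectively pay for whichever resource forces you to buy more of the platform.
    print(f"{name:9s} capacity=${capacity_cost:,.0f} "
          f"performance=${performance_cost:,.0f} "
          f"$/workload=${max(capacity_cost, performance_cost):,.0f}")
```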

Our focus on virtualization continues to help us differentiate and bring new virtualization- and cloud-centric functionality to market faster. The result is a platform that is 5-10x cheaper on CAPEX, 60x cheaper on OPEX and highly scalable.

Cheers..

@storarch

Simplifying Storage Chargeback/Showback with Tintri – Part 3 (Performance)

In Part 1 of this series, I covered the challenges around storage chargeback and showback in virtualized environments running on traditional storage platforms that use LUN/Volume abstraction layers, and in Part 2, I covered how Tintri simplifies and brings more accuracy to the capacity-centric model with its VM-centric design.

In this post I will cover how Tintri makes it easy to incorporate a more accurate Performance based model into Chargeback/Showback that can be used in combination with the Capacity Centric Model.

As I stated in the first post of the series, although everyone would like to add a performance-centric model to chargeback/showback, it is not that popular given the complexity of implementing something like that on the storage side of a virtualized environment.

As we all know, one of the big advantages (and differentiators) that Tintri has is that its abstraction layer is the vDisk (VMDK, VHD etc.) rather than a LUN or a Volume, which lets it observe workloads at the right level of abstraction. The other advantage is that, because of its tight API integration with the various hypervisors, it sees a vDisk as a vDisk and not as just another file. This allows it to build an IO profile of every vDisk in the system. The IO profile gives Tintri an understanding of the type of IO taking place inside a vDisk (random vs. sequential, %reads, %writes, block size), based on which it assigns ‘Performance Reserves’ to every vDisk. Performance Reserves are a combination of various resources in the storage platform – flash, CPU shares, network buffers etc. – and are shown by the Tintri management interface as % values.
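
As a rough mental model of how such a profile could drive reserves (an illustrative sketch only, not Tintri's actual algorithm; real Performance Reserves are also expressed against the whole VMstore rather than a single VM), think of each vDisk's demand as the bandwidth it pushes, weighted up for writes and random IO:

```python
from dataclasses import dataclass

@dataclass
class IOProfile:
    """Per-vDisk IO profile with the fields described above (the weights below are made up)."""
    iops: float
    block_size_kb: float
    read_pct: float     # 0-100
    random_pct: float   # 0-100

def toy_reserves(profiles: dict) -> dict:
    """Toy model: demand scales with bandwidth, weighted up for writes and random IO."""
    def demand(p: IOProfile) -> float:
        bandwidth_kbps = p.iops * p.block_size_kb               # data moved per second
        write_weight = 1.0 + 0.5 * (100 - p.read_pct) / 100     # writes assumed costlier
        random_weight = 1.0 + 0.5 * p.random_pct / 100          # random IO assumed costlier
        return bandwidth_kbps * write_weight * random_weight

    total = sum(demand(p) for p in profiles.values()) or 1.0
    return {name: round(100 * demand(p) / total, 1) for name, p in profiles.items()}

# Example: a DB VM -- D: is random database IO, E: is sequential log writes, C: is mostly idle.
db_vm = {
    "C:": IOProfile(iops=50,   block_size_kb=4,  read_pct=70, random_pct=50),
    "D:": IOProfile(iops=3000, block_size_kb=8,  read_pct=80, random_pct=95),
    "E:": IOProfile(iops=200,  block_size_kb=64, read_pct=5,  random_pct=5),
}
print(toy_reserves(db_vm))   # shares normalized over just this VM's vDisks for illustration
```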

So if we consider a database (DB) VM with a C: drive, a D: drive (DB files) and an E: drive (logs), Tintri looks at those individual drives, learns their IO profiles (say D: is random IO, E: is sequential and C: sees very little IO) and then automatically assigns Performance Reserves to them with the goal of delivering 99-100% of IO from flash and sub-ms response times. The first benefit of this approach (as I have discussed in my other posts) is that Tintri automatically tunes itself, unlike some other storage products that just display some values (most of them at a LUN/Volume level) and require the admin to take action – add/buy more flash, reshuffle the VMs, reduce CPU/cache load etc. The second benefit, the one we will discuss here in more detail, is that IT/Service Providers now have a metric against which they can do chargeback/showback.

Performance Reserves are independent of IOPS and are a measure of how much of the storage platform's resources are consumed (or will be consumed) by a vDisk in order to get 99-100% of IO from flash and sub-ms response times. Traditional performance-centric models that use IOPS as the measure don’t take into account the amount of storage resources a vDisk consumes for a particular type and size of IO.

The problem here is the traditional underlying storage architecture that continues to use LUNs/Volumes as the abstraction layer. In some cases these storage products can't even provide Quality of Service at those abstraction layers, let alone look at individual vDisks and automatically tune the various storage resources for a workload.

Here is a simple example of how three VMs with different IO characteristics get their Performance Reserves assigned by Tintri.

The VM ss_testld_1 in the first screenshot is doing 2,751 IOPs (Ave. 8K block size) with 90% Reads and has 5.2% Performance Reserves allocated to it.

Screenshot-1 - VM ss_testld_1

The VM ss_testld_2 in the second screenshot is doing 1,112 IOPs (Ave. 81.6K block size) with 90% Reads and has 10.8% Performance Reserves allocated to it.

Screenshot-2 - VM ss_testld_2

The VM ss_testld_3 in the third screenshot is doing 1,458 IOPs (Ave. 8K block size) with 90% Writes and has 4.6% Performance Reserves allocated to it.

Screenshot-3 - VM ss_testld_3

So what do we see here?

The VM ss_testld_1 is doing 90% reads, just like ss_testld_2, and is doing almost 2.5x the IOPS (2,751 vs. 1,112), but it still has a lower Performance Reserves footprint (5.2% vs. 10.8%) because it has a much smaller block size (avg. 8K vs. avg. 81.6K).

So, if we look at just IOPs, the cost of running ss_testld_1 seems higher but in reality the cost of running ss_testld_2 is more than double that of ss_testld_1.

In the same way, ss_testld_1 is doing almost double the IOPS of ss_testld_3 (2,751 vs. 1,458) with the same block size (avg. 8K), but the two use similar Performance Reserves (5.2% and 4.6%) because the latter is write-heavy (90% writes).

Here again, if we look at just the IOPS, the cost of running ss_testld_1 seems higher, but in reality the cost of running ss_testld_3 is almost the same as ss_testld_1, even though ss_testld_3 is doing half the IOPS.
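
A quick back-of-the-envelope check (a sketch using the screenshot numbers, not the actual reserve calculation) shows why: multiply IOPS by block size to see how much data each VM actually moves, factor in the read/write mix, and the picture lines up far better with the Performance Reserves than raw IOPS do.

```python
# Figures taken from the three screenshots above.
vms = {
    "ss_testld_1": {"iops": 2751, "block_kb": 8.0,  "read_pct": 90, "reserves_pct": 5.2},
    "ss_testld_2": {"iops": 1112, "block_kb": 81.6, "read_pct": 90, "reserves_pct": 10.8},
    "ss_testld_3": {"iops": 1458, "block_kb": 8.0,  "read_pct": 10, "reserves_pct": 4.6},
}

for name, vm in vms.items():
    throughput_mbps = vm["iops"] * vm["block_kb"] / 1024   # data actually moved per second
    print(f"{name}: {vm['iops']:>5} IOPS  ~{throughput_mbps:5.1f} MB/s  "
          f"{vm['read_pct']}% reads  reserves={vm['reserves_pct']}%")

# ss_testld_2 moves roughly 4x the data of ss_testld_1 despite doing ~2.5x fewer IOPS,
# and ss_testld_3's write-heavy mix keeps its reserves close to ss_testld_1's even at
# about half the IOPS.
```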

Taking a very simplistic example for chargeback/showback, if we consider the cost of 100% of the Reserves to be, say, $100,000, based on the cost to acquire, install and support Tintri –

  • ss_testld_1 would cost around $5.2K to run
  • ss_testld_2 would cost around $10.8K to run
  • ss_testld_3 would cost around $4.6K to run

If we had taken an IOPS-centric model, the costs would have been completely different, with no relation to the amount of storage resources consumed to run a particular type of workload.
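
To make that contrast explicit, here is a minimal sketch that charges the same dollar pool two ways, by Performance Reserves and by raw IOPS share, using the numbers above (the $100,000 figure is the same hypothetical cost basis as before):

```python
TOTAL_COST = 100_000  # hypothetical cost to acquire/install/support the array

vms = {"ss_testld_1": {"reserves_pct": 5.2,  "iops": 2751},
       "ss_testld_2": {"reserves_pct": 10.8, "iops": 1112},
       "ss_testld_3": {"reserves_pct": 4.6,  "iops": 1458}}

total_iops = sum(v["iops"] for v in vms.values())
# Recover the same total from these three VMs either way, so the models are comparable.
pool = TOTAL_COST * sum(v["reserves_pct"] for v in vms.values()) / 100

for name, v in vms.items():
    reserve_cost = TOTAL_COST * v["reserves_pct"] / 100   # charge by resources actually consumed
    iops_cost = pool * v["iops"] / total_iops             # charge the same pool by raw IOPS share
    print(f"{name}: reserve model ${reserve_cost:,.0f}  vs  IOPS model ${iops_cost:,.0f}")

# The IOPS split makes ss_testld_1 look like the most expensive VM, while the reserve
# model shows ss_testld_2 consuming roughly twice the storage resources.
```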

The other big challenge with an IOPS-centric model is its unpredictability. Let’s say we sized a storage platform for X IOPS (at, say, an 8K block size) with some read:write and random:sequential mix, and then priced per IOP accordingly. What if the platform doesn’t deliver those IOPS (say it delivers Y IOPS) because our assumptions were wrong, or because the workloads turned out to be completely unpredictable? The difference (X-Y) now has to be absorbed somewhere. To cover it, the IT/Service Provider would charge more next time and become uncompetitive.
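
As a simple illustration of that shortfall (hypothetical numbers only):

```python
# Hypothetical sizing exercise for an IOPS-priced service -- illustrative only.
annual_cost = 200_000                      # cost of running the platform per year
sized_iops = 100_000                       # X: what we sized and priced for (assumed 8K mix)
price_per_iop = annual_cost / sized_iops   # $2.00 per IOP committed to tenants

delivered_iops = 70_000                    # Y: what the real workload mix actually allowed
recovered = price_per_iop * delivered_iops # $140,000 billed
shortfall = annual_cost - recovered        # $60,000 = (X - Y) * price, to absorb somewhere...
next_price = annual_cost / delivered_iops  # ...or ~$2.86/IOP next period, i.e. less competitive
print(price_per_iop, shortfall, next_price)
```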

The Performance Reserve metric is independent of IOPS and can be combined with a capacity-centric model (for capacity-hungry VMs) to give a more accurate chargeback/showback model.

The cool thing about Tintri’s metrics is that they are all exposed through our REST APIs and PowerShell integration, so they can plug into any customized model as well, giving a more predictable, simplified and accurate chargeback/showback model.
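
For example, a custom model could pull the per-VM metrics over REST and combine the performance and capacity charges in a few lines. The sketch below is only an outline: the endpoint path, field names and credentials are placeholders, not the documented API, so check the Tintri REST API reference for the real resource names before using anything like this.

```python
import requests

VMSTORE = "https://vmstore.example.com"    # placeholder hostname
TOTAL_COST = 100_000                       # cost basis for 100% of Performance Reserves
COST_PER_GB = 3.0                          # capacity rate from the earlier example

session = requests.Session()
session.auth = ("admin", "password")       # placeholder credentials

# Hypothetical endpoint and field names -- consult the API docs for the real ones.
vms = session.get(f"{VMSTORE}/api/vm").json()

for vm in vms:
    perf_charge = TOTAL_COST * vm.get("performanceReservePct", 0) / 100
    cap_charge = COST_PER_GB * vm.get("usedGiB", 0)
    print(f"{vm.get('name')}: performance ${perf_charge:,.2f} + capacity ${cap_charge:,.2f}")
```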

Thanks for reading…

Cheers.

@storarch

Simplifying Storage Chargeback/Showback with Tintri – Part 2 (Capacity)

In the first post of this three-part series, we discussed the challenges around storage chargeback and showback in traditional environments. This post focuses on the capacity-based model for chargeback/showback and how Tintri brings more accuracy and value to the model.

As we all know by now (refer to my other blog posts), Tintri doesn’t use a LUN/Volume abstraction layer like traditional storage platforms. LUNs/Volumes were designed for physical environments, and we just continued to use them for virtualized environments since storage vendors did no innovation specifically for virtualized deployments (until now). Tintri uses vDisks (think VMDKs, VVols, VHDs etc.) as the abstraction layer in the storage platform, which lets it see workloads at the right level of abstraction without the added complexity of LUN/Volume layers. What this means to a Service Provider (internal IT or public) is that they can now look not just at what is provisioned to a VM or used inside the VM, but at the overall Capacity Footprint of the VM. That footprint consists not just of the live data but also of the space used for other things, like data protection.

In the example below, I have highlighted the VM Demo-A. As you can see, the Tintri GUI shows not only the provisioned space but also the space actually used by the VM in the Used GiB column. Double-clicking the VM brings up various graphs for the VM; in this case I have selected the ‘Space’ graph, which shows exactly how a 50 GiB VM is using 134.5 GiB. We break the 134.5 GiB down into Live Data, Hypervisor Snapshots and Tintri Snapshots to give a complete picture of the Capacity Footprint of the VM.

Capacity Showback

Tintri GUI Screenshot showing the Capacity Footprint of a VM

This is important not only because it allows the IT/Service Provider to charge for the right amount of storage consumed by the tenant, increasing the accuracy and predictability around storage consumption, but also because the IT/Service Provider can now offer more insight and value.

Chargeback/Showback models can be complex. Here we take a very simplistic example (a short code sketch of the same math follows the list) –

  • If we consider the cost of 1 GB of storage to be $3
    • With traditional storage, the chargeback would be 50 GB x $3 = $150
    • Whereas the actual cost of the VM Demo-A is 134.5 GB x $3 = $403.50
      • So either the Service Provider is taking a hit here, or it is including this cost as a buffer in the overall per-GB price, making it less competitive
    • As a value add, the IT/Service Provider can show the various buckets in which the capacity is being used so that the tenant can reduce its cost
      • In the case of the VM Demo-A, the breakdown would be –
        • Cost of storing Live Data – 45 GB x $3 = $135
        • Cost of Data Protection – (9.29 GB + 80 GB) x $3 = $267.87
      • With this information in hand, the tenant can decide to delete some of the older snapshots to reduce the cost of running the VM from a capacity standpoint
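
The same math in a few lines of code, using the Demo-A numbers from the Space graph and the simplistic $3/GB rate above:

```python
COST_PER_GB = 3.0   # simplistic example rate from above

demo_a_gib = {                   # capacity footprint buckets from the Space graph
    "provisioned": 50.0,
    "live_data": 45.0,
    "hypervisor_snapshots": 9.29,
    "tintri_snapshots": 80.0,
}

provisioned_charge = demo_a_gib["provisioned"] * COST_PER_GB   # $150 -- all a LUN-based model sees
footprint_gib = (demo_a_gib["live_data"]
                 + demo_a_gib["hypervisor_snapshots"]
                 + demo_a_gib["tintri_snapshots"])
footprint_charge = footprint_gib * COST_PER_GB                 # ~$403 -- the VM's real footprint

protection_charge = (demo_a_gib["hypervisor_snapshots"]
                     + demo_a_gib["tintri_snapshots"]) * COST_PER_GB
print(f"provisioned: ${provisioned_charge:.2f}  footprint: ${footprint_charge:.2f}  "
      f"of which data protection: ${protection_charge:.2f}")
```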

As we can see in this example, Tintri brings more simplicity and accuracy to a capacity-based chargeback/showback model. In the next post, I will discuss the performance-based model and how Tintri can help implement something that is potentially more accurate than a typical IOPS-based model.

Thanks for reading.

Cheers..

@storarch

Simplifying Storage Chargeback/Showback with Tintri – Part 1

Enterprises today are moving towards a private-cloud-based model for running their IT infrastructure, or they are looking at the public cloud for select workloads. One of the key requirements of deploying a private cloud or consuming the public cloud is chargeback or showback.

As we know, with chargeback the IT department/Service Provider hands a formal bill to the lines of business (LOBs)/customers to recover the cost of delivering the service to them, whereas with showback there is no exchange of bills or money. With showback, IT simply tries to introduce a culture of cost awareness, cost justification and capacity planning. The model an enterprise chooses depends on its policies and objectives.

Chargeback/Showback – The Storage Challenge

Both chargeback and showback require definitive metrics at various levels from which the overall cost of delivering a service can be measured. Here the discussion will focus on the data storage side of the equation.


Why Tintri’s milestone of 200,000 VMs and 16PB of storage is bigger than you think

Yesterday we announced a significant milestone of 200,000 VMs deployed on Tintri with close to 16PB addressed by these VMs.

Now, these numbers may not look big on their own, but they become really interesting when you combine them with the fact that we have doubled the number of VMs in the last 6 months and that the capacity numbers are for VMs experiencing sub-ms performance. All of this has come at the expense of incumbent storage providers, with Tintri beating both modern and traditional storage vendors in the process.

Tintri brought out a revolutionary storage platform in 2011, designed for flash and built solely to run VMs and vDisks. The idea was a storage platform designed with virtualization in mind, not physical workloads. To date, Tintri VMstore is the only external, best-of-breed platform with a laser focus on virtualization. General-purpose platforms (traditional and modern, flash and hybrid) continue to focus on physical workloads even as customers move towards 100% virtualization.

Where Tintri continues to have an edge across multiple hypervisors is vDisk-level visibility into workloads, compared to general-purpose platforms that have no idea what happens at the hypervisor layer.

One consequence of this is the IO blender effect that everyone talks about, with no effort to provide a solution beyond throwing flash at it. Tintri is the only platform that addresses it, delivering high performance, sub-ms response times, vDisk-granular isolation, predictability, analytics across the infrastructure and per-VM data services.

VMware VVols is a step in the right direction, but it is just an enabler for one hypervisor, and it is up to each vendor what functionality it brings to its storage platform. With all our experience in managing and understanding VMs, Tintri is committed to bringing all the goodness that customers have experienced on VMstore, and much more, to VVols. And only Tintri can give customers the choice of deploying VMs either with VVols or with an open implementation like NFS without compromising on functionality.

How will we maintain this growth, as the base gets bigger?

Tintri has a solid roadmap. Our focus on virtualization allows us to bring better functionality around virtualization and cloud to market much faster. We don’t have to try to be good at all workloads, as general-purpose platforms do, and compromise everywhere in the process. We are the best storage platform for virtualization and cloud, and we will continue to be so for a long time.

If you want to experience all-flash performance for all your virtualized applications (not just the critical ones) at a quarter of the cost of an over-provisioned flash array (sold on over-promised dedup and compression), with all the analytics, auto-tuning, VM-granular goodness and isolation our customers are experiencing, I would encourage you to contact your Tintri rep or your preferred partner. If you plan to attend VMworld, we are going to have a big presence there. You can book a private meeting with Tintri or attend one of our sessions. See details here.

Cheers..

@storarch

Converged Infrastructures – Trying to Cure the Symptoms not the Disease

Converged Infrastructures (CIs) and reference architectures like vBlock from VCE, VSPEX from EMC and FlexPod from NetApp have seen a lot of growth in the last few years. The growth did not come as a surprise to anyone, given the perception that CIs solve some of the big IT pain points around sizing, architecture, standardization and faster time to market.

The big benefit that vendors promised with CIs was a reduction in the time required to make infrastructure ready to deliver service, by cutting or eliminating the time spent in phases like architecture and sizing, detailed design, deployment and testing.

Reality

But what was the real need for CIs and reference architectures?
