What’s New in All-Flash?

Today, Tintri announced the Tintri VMstore T5000 All-Flash series—the world’s first all-flash storage system that lets you work at the VM level—leading a launch that includes Tintri OS 4.0, Tintri Global Center 2.1 and VMstack, Tintri’s partner-led converged stack. Since its inception in 2008, Tintri has delivered differentiated and innovative features and products for next-generation virtualized datacenters. And we’re continuing the trend with the game-changing All-Flash VM-Aware Storage (VAS).

Other all-flash vendors claim all-flash can be a solution for all workloads—a case of “if all you have is a hammer then everything looks like a nail.” Or, they’ll argue that all-flash can augment hybrid deployments, with the ability to pin or move entire LUNs and volumes.

[Image: Tintri launch timeline]

But not all workloads in a LUN or volume may have the same needs for flash, performance and latency. So just as we’ve reinvented storage over the past four years, Tintri’s ready to reinvent all-flash. Here’s how:

  • No LUNs. Continuing the Tintri tradition, the T5000 series eliminates LUNs and volumes, letting you focus on applications. We’re welcoming VMs to the all-flash space across multiple hypervisors.
  • Unified management. Aside from standalone installations, the T5000 series can also augment the T800, and vice-versa. Admins can now manage VMs across hybrid-flash and all-flash platforms in a single pool through Tintri Global Center (TGC), with full integration.
  • Fully automated policy-based infrastructure through TGC, with support from vDisk-granular analytics and VM-granular self-managed service groups.

With access to vDisk-granular historical performance data, SLAs and detailed latency information, customers can decide which workloads can benefit from all-flash vs hybrid-flash—especially when our hybrid-flash delivers 99-100% from flash.

But we hear you, storage admins: you want to go into the weeds. Surprise—we’re happy to help. Here’s what else the T5000 series can offer you:

  • Space savings from inline dedupe, compression, cloning and thin provisioning.
  • NVDIMMs, NTB, 10G and more of the latest hardware advancements.
  • Enterprise reliability exceeding 99.999% uptime.
  • Scale of up to 112,000 VMs, 2.3PB and up to 5.4M IOPs (random 60:40 R:W, 8K) in a single TGC implementation.  (These are real-life numbers, not 100% read numbers.)
  • VM-granular snapshots, cloning and replication.
  • vDisk-level Dynamic QoS to eliminate noisy neighbors and ensure peak performance.
  • VM-level Manual QoS to set up performance SLAs through Min and Max IOPs.
  • vDisk (VMDKs, VHDs)-level data synchronization across VMs for test and dev or any operations requiring periodic copying of data.
  • VM-level replication, backup and transfer between Hybrid Flash and All-Flash systems.
  • VM-granular performance analytics with end-to-end latency visualization that includes host, network, storage, contention and throttle latency.

Today, Tintri continues our solid roadmap of business-relevant innovations in storage for modern workloads. We changed the game for hybrid-flash—and we’re doing it again for all-flash.

Cheers,
@storarch

Why Are Modern Storage QoS Implementations Not Good Enough?

Storage QoS is becoming a key capability today, driven largely by the push for higher resource utilization and therefore more resource sharing. In the past, storage arrays dedicated a RAID group to each LUN, and applications had LUNs dedicated to them. That worked really well for guaranteed IOPs and isolation. As disk sizes grew and storage technology matured, we started sharing drives among LUNs, and those LUNs were no longer isolated from each other. Virtualization made the situation worse: not only were disks shared among LUNs, the LUNs themselves were shared by multiple workloads (VMs). The result is the noisy-neighbor problem that plagues LUN- and volume-based storage systems.

A few storage vendors have some form of manual QoS built into their storage OS. Tintri, by contrast, built an architecture from the beginning that enables always-on, fully automatic, dynamic QoS at the individual vDisk level (think VMDKs, VHDs and so on). Driven by its built-in IO analytics engine, the system automatically reserves storage resources per vDisk so that every vDisk gets the performance it needs at sub-millisecond response times. The architecture is designed such that a new vDisk draws its performance from the free reserves available in the system, so a vDisk that suddenly needs more performance never impacts an existing vDisk. This is a different approach from manually setting QoS at the LUN/Volume level, and it is highly effective for IT organizations that don't want to hand-hold their storage. Tintri is the only storage product out there with always-on, dynamic QoS enabled in all of its storage appliances.
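To make the free-reserves idea concrete, here is a minimal sketch of the concept in Python. It is purely illustrative, with invented names and numbers; it is not Tintri's actual implementation, which is driven by its internal IO analytics engine.

```python
# A minimal, hypothetical sketch of per-vDisk performance reservation from a
# shared pool of free reserves. Illustration of the concept only; names and
# numbers are made up.

class PerformancePool:
    def __init__(self, total_reserves=100.0):
        self.total = total_reserves          # total performance capacity (%)
        self.reservations = {}               # vdisk_id -> reserved %

    @property
    def free(self):
        return self.total - sum(self.reservations.values())

    def reserve(self, vdisk_id, needed_pct):
        """Reserve performance for a vDisk out of the free pool only.

        Existing reservations are never reduced, so a new or busier vDisk
        cannot take performance away from vDisks already being served.
        """
        grant = min(needed_pct, self.free)
        self.reservations[vdisk_id] = self.reservations.get(vdisk_id, 0.0) + grant
        return grant


pool = PerformancePool()
pool.reserve("sql-vm.vmdk", needed_pct=10.0)   # demand estimated by an IO analytics engine
pool.reserve("web-vm.vmdk", needed_pct=5.0)
print(f"Free reserves left: {pool.free:.1f}%")
```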


Having said that, manually configured QoS does have its place in the Service Provider (SP) space as well as in some private cloud implementations, where the provider (public or private) doesn't want to give everyone unlimited performance. These SPs want to be able to sell, say, a Platinum service tier to their customers, and to change a customer's tier on the fly without moving the workload. Which brings me back to my original point about the QoS implemented by storage vendors today. Here is why I say it is not good enough:

Granularity Challenge

In today's datacenters, workloads are virtual, and clouds are not implemented without virtualization. In these virtualized datacenters, dealing with LUNs/Volumes is a pain. LUNs were introduced 30-40 years ago, when workloads were physical, and we kept using LUNs/Volumes for virtualized workloads simply because that's what the storage systems knew. In a virtual environment, a LUN hosts multiple workloads, so implementing QoS on the LUN buys you nothing for individual virtual workloads, whether the goal is isolation or chargeback. VVols will change that (only for vSphere), but there is still a long way to go: VVols don't yet support all vSphere features, and not all vendors have a practically deployable implementation.

The result is that the VMs in a LUN/Volume share whatever IOPs limit is set at the LUN/Volume level, and therefore interfere with each other.
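A tiny sketch of that difference, with made-up numbers: a limit applied at the LUN level is effectively shared by whatever VMs happen to live in that LUN, so a noisy VM squeezes the others, whereas per-vDisk limits isolate them.

```python
# LUN-level vs vDisk-level limits, illustrative numbers only.

LUN_LIMIT_IOPS = 5000
demand = {"noisy-vm": 6000, "app-vm": 1000, "test-vm": 500}

# LUN-level QoS: one shared limit, consumed roughly in proportion to demand,
# so the noisy VM crowds out the well-behaved ones.
total_demand = sum(demand.values())
lun_level = {vm: min(d, LUN_LIMIT_IOPS * d / total_demand) for vm, d in demand.items()}

# vDisk-level QoS: each VM has its own limit and is unaffected by its neighbours.
per_vdisk_limit = {"noisy-vm": 2000, "app-vm": 2000, "test-vm": 1000}
vdisk_level = {vm: min(d, per_vdisk_limit[vm]) for vm, d in demand.items()}

print("LUN-level  :", {vm: round(v) for vm, v in lun_level.items()})
print("vDisk-level:", vdisk_level)
```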

The IOPs Dilemma

Storage QoS is generally implemented in terms of IOPs. Some vendors let you combine an IOPs limit with an MB/s limit, but only a few; usually it is one or the other.

Now here is the problem: IOPs mean different things depending on block size. If I limit a LUN/Volume to 1,000 IOPs, here is what that could mean:

  • At a 4K block size, 1,000 IOPs means 4MB/s
  • At an 8K block size, 1,000 IOPs means 8MB/s
  • At a 64K block size, 1,000 IOPs means 64MB/s

The same 1,000 IOPs can mean 16x more load on the system at a 64K block size than at 4K. That is a big difference for a Service Provider to account for when pricing a service. In some cases, a large number of small-block IOs may even stress the storage more than large-block IOs. Some vendors can combine the IOPs limit with a throughput limit to get around this to some extent, but ideally service providers want a single unit to bill against and a single scale on which to measure everyone. Microsoft's implementation of normalized IOPs is a great example of such a metric.
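Here is a quick back-of-the-envelope script for the numbers above. The 8KB normalization unit reflects the general idea behind normalized IOPs (each IO counted in fixed-size units); treat the exact unit as an assumption.

```python
# The same 1,000 IOPs limit translates into very different throughput depending
# on block size; a normalized-IOPs scheme puts large and small IOs on one scale.

IOPS_LIMIT = 1000
NORMALIZATION_UNIT_KB = 8  # assumed normalization unit

for block_kb in (4, 8, 64):
    throughput_mb_s = IOPS_LIMIT * block_kb / 1000
    # each IO counts as ceil(block_kb / unit) normalized IOs
    normalized_per_io = -(-block_kb // NORMALIZATION_UNIT_KB)
    print(f"{block_kb:>3}K blocks: {IOPS_LIMIT} IOPs = {throughput_mb_s:.0f} MB/s, "
          f"{IOPS_LIMIT * normalized_per_io} normalized IOPs")
```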

The Throttling Effect

Storage systems that apply QoS on LUNs have another problem to deal with, specifically on hosts that map more than one LUN from the same storage system through an HBA. When a QoS limit is set on a specific LUN and that LUN tries to exceed it, the storage system throttles it. The IOs queue up at the HBA, and at that point the host starts throttling IO not just to the LUN in question but to all the LUNs coming from that storage system, assuming the array cannot handle the load being sent to it. This makes it practically impossible to apply QoS to an individual LUN without impacting the other LUNs.
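The effect is easier to see with a toy time-stepped model. Everything below (queue depth, arrival and service rates, issue order) is made up purely to illustrate how a throttled LUN's backlog in a shared HBA queue ends up starving a well-behaved LUN behind the same HBA.

```python
# Toy model: when the array throttles one LUN, that LUN's IOs back up in the
# shared HBA queue and the host slows down *all* LUNs behind that HBA.
# Purely illustrative; all numbers and the issue order are simplifications.

HBA_QUEUE_DEPTH = 32                            # outstanding IOs the host allows per HBA
ARRIVAL_PER_TICK = {"lun_a": 30, "lun_b": 10}   # what each LUN wants to issue
SERVICE_PER_TICK = {"lun_a": 5, "lun_b": 10}    # lun_a is throttled by the array

outstanding = {"lun_a": 0, "lun_b": 0}
completed = {"lun_a": 0, "lun_b": 0}

for tick in range(100):
    # the array completes IOs, limited by its per-LUN throttle
    for lun in outstanding:
        done = min(outstanding[lun], SERVICE_PER_TICK[lun])
        outstanding[lun] -= done
        completed[lun] += done
    # the host issues new IOs only while the shared HBA queue has free slots
    for lun, want in ARRIVAL_PER_TICK.items():
        free_slots = HBA_QUEUE_DEPTH - sum(outstanding.values())
        issued = min(want, max(free_slots, 0))
        outstanding[lun] += issued

print(completed)  # lun_b completes far fewer IOs than it could on its own
```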

[Image: QoS throttling and fair-share]

The Visibility and Analytics Challenge

For most storage vendors, QoS is more of a checkbox feature, with very few real-world deployments. The reason is that QoS is genuinely hard to implement well, and there are more ways to get it wrong than right. QoS has to be implemented as a strategy, across all workloads. The challenge is that once someone gets it wrong, it is not easy to fix, and determining the cause usually means involving the vendor's support team. Some vendors sell professional services around this, which makes it a really expensive feature to implement.

The other point is that QoS itself can become a source of latency, either because of the max limits set on a workload or because the sum of the minimum guaranteed IOPs configured across workloads exceeds what the storage system can actually deliver. Ideally, storage systems should give more insight into QoS and its impact on individual workloads, so that when someone complains about latency or a drop in performance, the IT team can quickly pinpoint the reason. None of the storage vendors provide advanced, user-friendly analytics for QoS today, and that is one of the biggest inhibitors to real-world adoption of QoS.
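As a sketch of what such analytics could look like, the snippet below takes a per-VM latency breakdown of the kind described earlier (host, network, storage, contention, throttle) and flags the dominant contributor. The VM names and numbers are invented, and the remediation hints are only illustrations.

```python
# Flag the dominant latency contributor per VM so an admin can immediately tell
# whether a QoS max limit (throttle) or oversubscribed guarantees (contention)
# are behind a complaint. Numbers are made up.

latency_ms = {
    "erp-db-01":    {"host": 0.2, "network": 0.1, "storage": 0.4, "contention": 0.1, "throttle": 6.5},
    "web-frontend": {"host": 0.3, "network": 0.2, "storage": 0.5, "contention": 2.8, "throttle": 0.0},
}

for vm, parts in latency_ms.items():
    total = sum(parts.values())
    worst = max(parts, key=parts.get)
    print(f"{vm}: {total:.1f} ms total, dominated by {worst} ({parts[worst]:.1f} ms)")
    if worst == "throttle":
        print("  -> hitting its QoS max limit; raise the limit or move it to a higher tier")
    elif worst == "contention":
        print("  -> minimum guarantees are likely oversubscribed on this system")
```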

To summarize: the QoS offered by storage vendors today is not granular enough; it doesn't have a single scale on which to measure or apply QoS guarantees/limits; it doesn't ensure performance fair-share; it doesn't guarantee isolation; and storage providers don't have the analytics needed to make QoS easy to implement and troubleshoot. I think it's time to address these challenges so that QoS can be widely accepted and implemented in the datacenter.

Cheers..

@storarch

Simplifying Storage Chargeback/Showback with Tintri – Part 3 (Performance)

In Part 1 of this series, I covered the challenges around storage Chargeback and Showback in virtualized environments running on traditional storage platforms that use LUN/Volume abstraction layers, and in Part 2 I covered how Tintri simplifies and brings more accuracy to the capacity-centric model with its VM-centric design.

In this post I will cover how Tintri makes it easy to incorporate a more accurate Performance based model into Chargeback/Showback that can be used in combination with the Capacity Centric Model.

As I stated in the first post, although everyone would like to add a performance-centric model to Chargeback/Showback, it is not very popular, given the complexity of implementing something like that on the storage side in a virtualized environment.

As we all know, one of the big advantages (and differentiators) Tintri has is that its abstraction layer is the vDisk (VMDK, VHD etc.), not a LUN or a Volume, which lets it observe workloads at the right level of abstraction. The other advantage is that, because of its tight API integration with the various hypervisors, it sees a vDisk as a vDisk and not as just another file. This allows it to build an IO profile of every vDisk in the system. The IO profile gives Tintri an understanding of the type of IO taking place inside a vDisk (random, sequential, % reads, % writes, block size), based on which it assigns 'Performance Reserves' to every vDisk. Performance Reserves are a combination of various resources in the storage platform, such as flash, CPU shares and network buffers, and are shown by the Tintri management interface as a percentage.

So if we consider a database (DB) VM with a C: drive, a D: drive (DB files) and an E: drive (logs), Tintri looks at those individual drives, learns their IO profiles (say D: is random IO, E: is sequential and C: sees little IO) and then automatically assigns Performance Reserves to them, with the goal of delivering 99-100% of IO from flash at sub-millisecond response times. The first benefit (as discussed in my other posts) of this approach is that Tintri tunes itself automatically, unlike other storage products that just report some values (most of them at a LUN/Volume level) and require the admin to take action, such as adding/buying more flash, reshuffling VMs, or reducing CPU/cache load. The second benefit, the one we discuss here in more detail, is that IT/Service Providers now have a metric against which they can do Chargeback/Showback.
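To illustrate the idea (and only the idea), here is a deliberately naive sketch that turns a vDisk's IO profile into a reserve percentage. The weighting factors and scaling are invented for this example; Tintri's actual analytics engine is not public and is certainly more sophisticated.

```python
# Hypothetical heuristic: build an IO profile per vDisk and turn it into a
# performance-reserve percentage. All weights and scales below are invented.

def reserve_pct(iops, block_kb, write_pct, random_pct):
    """Estimate the share of system resources a vDisk needs (illustrative only)."""
    data_rate_mb_s = iops * block_kb / 1000.0
    write_factor = 1.0 + write_pct / 100.0          # writes assumed ~2x the cost of reads
    random_factor = 1.0 + 0.5 * random_pct / 100.0  # random IO assumed up to 1.5x the cost
    load_units = data_rate_mb_s * write_factor * random_factor
    return load_units / 10.0                        # scale into a % of one array (made up)

db_vm = {
    "C:": reserve_pct(iops=50,   block_kb=8,  write_pct=30, random_pct=50),  # mostly idle OS disk
    "D:": reserve_pct(iops=2000, block_kb=8,  write_pct=40, random_pct=90),  # random DB IO
    "E:": reserve_pct(iops=800,  block_kb=64, write_pct=95, random_pct=5),   # sequential log writes
}
for vdisk, pct in db_vm.items():
    print(f"{vdisk} -> {pct:.1f}% reserves")
```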

Performance Reserves are independent of IOPs; they measure how much of the storage system's resources a vDisk consumes (or will consume) in order to get 99-100% of its IO from flash at sub-millisecond response times. Traditional performance-centric models that use IOPs as the measure don't take into account the amount of storage resources a vDisk actually consumes for a particular type and size of IO.

The problem here is the traditional underlying storage architecture that continues to use LUNs/Volumes as the abstraction layer. In some cases these storage products can't provide Quality of Service even at those abstraction layers, let alone look at individual vDisks and automatically tune storage resources for each workload.

Let's look at a simple example of how three VMs with different IO characteristics get their Performance Reserves assigned by Tintri.

The VM ss_testld_1 in the first screenshot is doing 2,751 IOPs (Ave. 8K block size) with 90% Reads and has 5.2% Performance Reserves allocated to it.

[Screenshot 1: VM ss_testld_1]

The VM ss_testld_2 in the second screenshot is doing 1,112 IOPs (Ave. 81.6K block size) with 90% Reads and has 10.8% Performance Reserves allocated to it.

[Screenshot 2: VM ss_testld_2]

The VM ss_testld_3 in the third screenshot is doing 1,458 IOPs (Ave. 8K block size) with 90% Writes and has 4.6% Performance Reserves allocated to it.

[Screenshot 3: VM ss_testld_3]

So what do we see here?

The VM ss_testld_1 is doing 90% reads, just like ss_testld_2, and is doing almost 2.5x the IOPs (2,751 vs 1,112), yet it has a lower Performance Reserves footprint (5.2% vs 10.8%) because its block size is much smaller (avg. 8K vs avg. 81.6K).

So, if we look at just IOPs, the cost of running ss_testld_1 seems higher but in reality the cost of running ss_testld_2 is more than double that of ss_testld_1.

In the same way, ss_testld_1 is doing almost double the IOPs of ss_testld_3 (2,751 vs 1,458) at the same block size (avg. 8K), but they use similar Performance Reserves (5.2% and 4.6%) because the latter is write-heavy (90% writes).

Again, looking at IOPs alone, ss_testld_1 seems more expensive to run, but in reality ss_testld_3 costs almost as much as ss_testld_1 despite doing roughly half the IOPs.

Taking a very simplistic example for Chargeback/Showback: if we put the cost of 100% Reserves at, say, $100,000 (based on the cost to acquire, install and support the Tintri system), then:

  • ss_testld_1 would cost around $5.2K to run
  • ss_testld_2 would cost around $10.8K to run
  • ss_testld_3 would cost around $4.6K to run

Had we taken an IOPs-centric model, the costs would have come out completely different, with no relation to the amount of storage resources actually consumed to run each type of workload.
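Here is a small sketch of that comparison using the numbers from the screenshots above. The reserve-based cost follows directly from the $100,000-for-100%-reserves assumption; the per-IOP rate used for the IOPs-centric column is an arbitrary figure added just for contrast.

```python
# Reserve-based vs IOPs-based chargeback for the three test VMs.

COST_OF_FULL_RESERVES = 100_000      # $ for 100% of the array's performance (from the example)
ASSUMED_COST_PER_IOP = 3.50          # $ per provisioned IOP (made-up rate for comparison)

vms = {
    "ss_testld_1": {"iops": 2751, "reserves_pct": 5.2},
    "ss_testld_2": {"iops": 1112, "reserves_pct": 10.8},
    "ss_testld_3": {"iops": 1458, "reserves_pct": 4.6},
}

for name, vm in vms.items():
    reserve_cost = COST_OF_FULL_RESERVES * vm["reserves_pct"] / 100
    iops_cost = vm["iops"] * ASSUMED_COST_PER_IOP
    print(f"{name}: ${reserve_cost:,.0f} by reserves vs ${iops_cost:,.0f} by IOPs")

# Note how the IOPs-based bill inverts the real resource cost of
# ss_testld_1 and ss_testld_2.
```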

The other big challenge with an IOPs-centric model is its unpredictability. Let's say we sized a storage platform for X IOPs (say, at 8K) with an assumed read:write and random:sequential mix, and then priced per IOP accordingly. What if the platform doesn't deliver those IOPs (say it delivers Y) because our assumptions were wrong, or because the workloads turned out to be completely unpredictable? The difference (X-Y) now has to be absorbed somewhere, so to cover it the IT/Service Provider charges more next time and becomes uncompetitive.

The Performance Reserve metric is independent of IOPs and can be combined with a capacity-centric model (for capacity-hungry VMs) to give a more accurate Chargeback/Showback model.

The cool thing about Tintri's metrics is that they are all exposed through our REST APIs and PowerShell integration, so they can plug into any customized model as well, giving a more predictable, simplified and accurate Chargeback/Showback model.
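As an illustration of how those metrics could feed a custom model, here is a hypothetical sketch that pulls per-VM data over REST and prices it. The endpoint paths, field names, rates and login flow are placeholders only; the real resource names live in the Tintri REST API documentation.

```python
# Hypothetical sketch of REST-driven chargeback. All URLs, payloads and field
# names below are placeholders, not the actual Tintri API.

import requests

TINTRI = "https://vmstore.example.com"
session = requests.Session()
session.verify = False  # lab-only shortcut; use proper CA-signed certificates in production

# placeholder login and metrics calls; the real paths and payloads will differ
session.post(f"{TINTRI}/api/login", json={"username": "admin", "password": "secret"})
vms = session.get(f"{TINTRI}/api/vms").json()

RATE_PER_RESERVE_PCT = 1_000   # $ per 1% of performance reserves (assumed)
RATE_PER_GIB = 3               # $ per GiB of capacity footprint (assumed)

for vm in vms:
    bill = (vm["performanceReservePct"] * RATE_PER_RESERVE_PCT
            + vm["capacityFootprintGiB"] * RATE_PER_GIB)
    print(f'{vm["name"]}: ${bill:,.2f}')
```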

Thanks for reading…

Cheers.

@storarch

Simplifying Storage Chargeback/Showback with Tintri – Part 2 (Capacity)

In the first post of this three part series, we discussed the challenges around Storage Chargeback and Showback in traditional environments. This post will focus on the Capacity based Model for Chargeback/Showback and how Tintri brings in more accuracy and value add to the model.

As we all know by now (refer to my other blog posts), Tintri doesn't use a LUN/Volume abstraction layer like traditional storage platforms. LUNs/Volumes were designed for physical environments, and we just kept using them for virtualized environments because storage vendors did no innovation specifically for virtualized deployments (until now). Tintri uses vDisks (think VMDKs, VVols, VHDs etc.) as the abstraction layer in the storage platform, which lets it work at the right level of abstraction without the added complexity of LUNs/Volumes layered on top. What this means for a Service Provider (internal IT or public) is that they can now look not just at what is provisioned to a VM or used inside the VM, but at the VM's overall capacity footprint. That footprint consists not just of the live data but also of the space used for other things, such as data protection.

In the example below, I have highlighted the VM Demo-A. As you can see, the Tintri GUI shows not only the provisioned space but also the space actually used by the VM, in the Used GiB column. Double-clicking the VM brings up various graphs for the VM; here I have selected the 'Space' graph, which shows exactly how a 50 GiB VM is using 134.5 GiB. We break down the 134.5 GiB into live data, hypervisor snapshots and Tintri snapshots to give a complete picture of the VM's capacity footprint.

[Screenshot: Tintri GUI showing the capacity footprint of VM Demo-A]

This matters not only because it allows the IT/Service Provider to charge for the right amount of storage consumed by the tenant, increasing the accuracy and predictability of storage consumption, but also because the IT/Service Provider can now offer more insight and added value.

Chargeback/Showback models can be complex, so here we take a very simplistic example (a small worked sketch follows the list):

  • If we consider the cost of 1 GB of storage as $3
    • With traditional storage, the chargeback would be 50 GB x $3 = $150
    • Whereas the actual cost of the VM Demo-A is 134.5 GB x $3 = $403.5
      • So either the Service Provider takes a hit here, or it builds this cost in as a buffer in the overall per-GB price, making it less competitive
    • As a value add, the IT/Service Provider can show the various buckets in which capacity is being consumed, so the tenant can reduce its cost
      • So in the case of the VM Demo-A, the breakdown would be:
        • Cost of storing live data: 45 GB x $3 = $135
        • Cost of data protection: $3 x (9.29 GB + 80 GB) = $267.87
      • With this information in hand, the tenant can decide to delete some of the older snapshots to reduce the capacity cost of running the VM
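Plugging the Demo-A numbers into a few lines of Python makes the gap between provisioned-capacity billing and footprint billing obvious. The components below sum to about 134.3 GiB; the GUI's rounded 134.5 GiB figure gives the $403.5 quoted above.

```python
# Worked version of the Demo-A example: $3 per GiB, 45 GiB of live data, and
# 9.29 + 80 GiB consumed by hypervisor and Tintri snapshots respectively.

COST_PER_GIB = 3

demo_a = {
    "provisioned": 50,
    "live_data": 45,
    "hypervisor_snapshots": 9.29,
    "tintri_snapshots": 80,
}

footprint = demo_a["live_data"] + demo_a["hypervisor_snapshots"] + demo_a["tintri_snapshots"]
data_protection = demo_a["hypervisor_snapshots"] + demo_a["tintri_snapshots"]

print(f"Provisioned-based bill:     ${demo_a['provisioned'] * COST_PER_GIB:.2f}")   # $150.00
print(f"Footprint-based bill:       ${footprint * COST_PER_GIB:.2f}")               # ~$402.87
print(f"  of which data protection: ${data_protection * COST_PER_GIB:.2f}")         # $267.87
```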

As we can see in this example, Tintri brings more simplicity and accuracy to a capacity-based chargeback/showback model. In the next post, I will discuss the performance-based model and how Tintri can help implement something potentially more accurate than a typical IOPs-based model.

Thanks for reading.

Cheers..

@storarch

Simplifying Storage Chargeback/Showback with Tintri – Part 1

Enterprises today are moving towards a Private Cloud based model for running their IT infrastructure, or they are looking at the Public Cloud for select workloads. One of the key requirements of deploying a private cloud or consuming the public cloud is Chargeback or Showback.

As we know, with Chargeback the IT department/Service Provider hands a formal bill to the lines of business (LOBs)/customers to recover the cost of delivering the service to them, whereas with Showback there is no exchange of bills or money. With Showback, IT simply tries to introduce a culture of cost awareness, cost justification and capacity planning. The model an enterprise chooses depends on its own policies and objectives.

Chargeback/Showback – The Storage Challenge

Both Chargeback and Showback require some definitive metrics at various levels based on which the overall cost can be measured for delivering a service. Here the discussion will focus on the Data Storage side of the equation.


A new beginning ……

Yes, I have taken the plunge and I am on to a new challenge. I had a great ride at NetApp and I will cherish every moment I spent there, learning a lot which helped me grow both personally and professionally.

I think it is important to take on new challenges and adapt to change, because nothing ever stays the same. Unless you can adapt and change too, you will be stuck doing the same things, which can make your life a lot harder than it needs to be.

So, what makes Tintri so special?
