Tintri, Hyper-V, Veeam and the Reddit Thread

Update – Here are links to the YouTube videos showing Hyper-V backup using both Commvault and Veeam.

Commvault

Veeam

Original Post

A few weeks back, a Tintri customer posted a comment on Reddit regarding Tintri, Hyper-V and Veeam.

https://www.reddit.com/r/sysadmin/comments/41uje6/tintri_veeam_hyperv_scvmm_and_woe/

The comment went something like this –

For all the love they show each other about compatibility, Hyper-V is not supported by Veeam on Tintri VM Stores. This is due to Tintri not having a hardware VSS provider. Lots of finger pointing to find the answer.

Also, Tintri’s SMB3 implementation has a few major gaps. Folder inheritance isn’t a thing right now (slated for bugfix in 4.2), which means if you add hosts to a cluster, you have to fix permissions on all existing subfolders.

On top of all that, SCVMM can create shares using SMI-S, but cannot delete them. You have to delete the share on the VM Store and then rescan the storage provider.

Edit: I forgot to mention their PowerShell modules have no consistency between them. One uses Verb-TTNoun, the other uses Verb-TintriNoun.

There were a lot of comments and sub-comments on the thread, some accurate and others not. We tried to post a response, but because of Reddit’s strict posting policies, none of our comments ever showed up. As a result, the thread has caused some confusion among our customers and prospects.

So I wanted to take this opportunity to respond to the thread, clarify a few things and update everyone on the current status.

  • Backing up using a backup application through Remote VSS

The Tintri VMstore’s Microsoft VSS integration did lack some functionality needed for remote-VSS-based backups of Hyper-V (i.e., using backup applications such as Veeam and Commvault). This will be fully supported in an upcoming release that is currently in QA and will be available to customers soon.

  • Folder inheritance over SMB

Folder inheritance over SMB has been supported since Tintri OS 3.1.1 in the context of storage for virtual machines. There is a specific case where this wasn’t handled correctly; as the customer pointed out, it is rectified in the same upcoming Tintri OS update mentioned above.

  • Removing SMB share using SCVMM

There is an issue removing SMB shares via SCVMM (SMI-S), and it has also been fixed in the update mentioned above. As the customer pointed out, it is still possible to remove an SMB share through the VMstore UI.

  • Inconsistency in the naming of PowerShell cmdlets

To clarify, there is no inconsistency here; this is by design. The Verb-TintriNoun PowerShell cmdlets make up the Tintri Automation Toolkit (a free download from our support site) and are used for customer automation around Tintri products.

The Verb-TTNoun cmdlets are a collection of cmdlets for validating certain Microsoft environmental factors, not specific to Tintri, that can impact Hyper-V functionality. They are used primarily by Tintri field and support technicians, and by some automation. The separate ‘TT’ namespace avoids confusion or overlap with other modules.
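
As a hedged illustration of the two namespaces (Connect-TintriServer and Get-TintriVM ship with the Automation Toolkit, though exact parameters can vary by version; the Verb-TTNoun cmdlet name below is hypothetical, shown only to convey the naming convention):

```powershell
# Tintri Automation Toolkit (Verb-TintriNoun): customer-facing automation.
Connect-TintriServer -Server vmstore01.example.com   # example server name
Get-TintriVM -Name "sql-prod-01"                     # example VM name

# Environment validation (Verb-TTNoun): checks Microsoft environmental
# factors that can impact Hyper-V; used mainly by Tintri field/support.
# Hypothetical cmdlet name, shown only to illustrate the convention.
Test-TTHyperVHost -ComputerName "hv-host-01"
```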

As always, Tintri is committed to its multi-hypervisor story, including Hyper-V, and we have several large customers who have successfully deployed Tintri with Hyper-V and are enjoying the benefits of Tintri’s VM-aware functionality in their environments. We apologize for the inconvenience these issues have caused our customers, and we have ensured that all of them are ironed out in the upcoming Tintri OS release.

PS: Although the customer didn’t mention anything about his company, we believe he contacted support and received an update directly from the support team.

@storarch


Some thoughts on NetApp’s acquisition of SolidFire

Yesterday, NetApp announced that it has entered into a definitive agreement to acquire SolidFire for $875M in an all-cash transaction. Having spent more than 7 years at NetApp, I thought I would provide my perspective on the deal.

As we all know, NetApp had a three-pronged flash strategy. First, All-Flash FAS, for customers looking to get the benefits of ONTAP’s rich data management features but with better performance and lower latency. Second, E-Series, for customers looking to use application-side features with a platform that delivers raw performance. And third, FlashRay, for customers looking for something designed from the ground up for flash that could use denser, cheaper flash media to deliver a lower-cost alternative with inline space efficiency and data management features.

The SolidFire acquisition replaces the FlashRay portion of that strategy. The FlashRay team took forever to get a product out the door and then, surprisingly, couldn’t even deliver on HA. The failure to deliver FlashRay is definitely alarming, as NetApp had some quality engineers working on it. SolidFire gives NetApp a faster time to market, relatively speaking. Here is why I think SolidFire made the most sense for NetApp:

  • SolidFire gives NetApp an arguably highly scalable block-based product (at least on paper). SolidFire’s Fibre Channel approach is a little funky, but let’s ignore that for now.
  • SolidFire is one of the few vendors out there with native cloud integration, which plays well with NetApp’s Data Fabric vision.
  • SolidFire is only the second flash product designed from the ground up that can do QoS. I am not a fan, as you can read here, but they are better than the pack. (You know which one is the other – Tintri’s hybrid and all-flash VMstores, with more granular per-VM QoS, of course.)
  • AltaVault gives NetApp a unified strategy to back up all NetApp products, so the all-flash product no longer has to work with SnapVault or other ONTAP functionality. The field teams would like to see tighter integration with SnapManager and the like, but since most modern products make good use of APIs, that should not be difficult. (One of the key reasons NetApp wanted to develop an all-flash product internally was that they wanted it to work with ONTAP – you are not surprised, are you?)
  • SolidFire has a better story around service providers than some of the other traditional all-flash vendors, and service providers are a big focus within NetApp.
  • SolidFire is open to running Element OS on hardware beyond the Dell and Cisco platforms it supports today. I will add that, from what I have gathered, SolidFire keeps more control over what type of hardware one can use, so it’s not as open as some of the other solutions out there.
  • And yes, SolidFire would have been much cheaper than other, more established alternatives, making the deal sweeter.

I won’t go into where SolidFire as a product misses the mark; you can find those details around the internet. Look here and here.

Technology aside, one of the big challenges for NetApp will be execution in the field. The NetApp field sales team always leads with ONTAP, and the optimization of ONTAP for all-flash will make it difficult for the SolidFire product to gain mindshare unless leadership puts a specific strategy in place to change this behavior. SolidFire will be going from a sales team that woke up every day to sell and create opportunities for its product to a team that historically hasn’t sold anything other than ONTAP. Hopefully, NetApp can get around this and execute in the field. At least that’s what SolidFire employees will be hoping for.

What’s next for NetApp? I can’t remember where, but someone on Twitter, a blog or a podcast suggested that NetApp may go private in the coming year(s). It sounds crazy, but I think it’s the only way for companies like NetApp and EMC to restructure and remove the pressure of delivering top-line growth, especially given falling storage costs, improvements in compute hardware, the move toward more software-centric sales, utility-based pricing models and the cloud.

From a Tintri standpoint, the acquisition doesn’t change anything. We believe that flash is just a medium, and that products like SolidFire, Pure Storage and XtremIO – or any product that uses LUNs and volumes as its abstraction layer – have missed the opportunity to change how modern workloads are handled in the datacenter. LUNs and volumes were designed specifically for physical workloads, and we have made them work with virtual workloads through overprovisioning and constant babysitting. Flash just throws a lot of performance at the problem and contributes to more overprovisioning. Whether customers deploy SolidFire, Pure Storage or XtremIO, nothing will change; it just delays the inevitable. So pick your widget based on the incumbent in your datacenter, or based on price.

If you want to fix the problem, remove the pain of constantly managing and reshuffling storage resources, and make storage invisible, then talk to Tintri.

Contact us and we will prove that VM-aware storage can drive down CAPEX (up to 5x) and OPEX (up to 52x) while saving you time.


While you are at it, don’t forget to check out our Simplest Storage Guarantee here.


Cheers..

@storarch


Choosing analytics: built-in, on-premises, or cloud-based

With the announcement of Tintri Analytics, we delivered on our vision: comprehensive, application-centric, real-time analytics (through fully integrated on-prem and cloud-based solutions) that provide predictive, actionable insights based on up to three years of historical system data.

Customers can now automatically (or manually) group VMs based on applications to analyze application profiles, run what-if scenarios, and model workload growth in terms of performance, capacity and flash working sets.

When you consider a storage solution refresh, analytics probably tops your list of needed features. It simplifies IT’s job, makes IT more productive and helps organizations save time and money.

The question is, what type of analytics should an organization look to have—built-in, on-premises or cloud-based? If you are just getting started, any sort of analytics would be great! Most storage vendors have an on-premises and/or a cloud-based solution. But an ideal storage product should have all three, as each of them has its own irreplaceable use case. Let’s take a look at each one.

Built-in analytics for auto-tuning

Built-in analytics that the system uses for self-tuning are uncommon in the industry. Tintri’s auto-QoS capability is a great example: it uses built-in analytics, available at the vDisk level, to logically divide up all the storage resources and allocate the right share of each resource type (flash, CPU, network buffers, etc.) to every vDisk. By doing this, a Tintri VMstore ensures that each vDisk is isolated from the others, with no noisy neighbors.
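
As a toy sketch only – this is not Tintri’s actual algorithm, and the weighting formula and numbers are invented for illustration – the general idea is to derive a relative cost for each vDisk from its observed IO profile and normalize those costs into resource shares:

```powershell
# Toy model: turn per-vDisk IO profiles into normalized resource shares.
# The weighting (block size x randomness premium) is invented for
# illustration; it is not Tintri's algorithm.
$vdisks = @(
    [pscustomobject]@{ Name = 'db.vmdk';   Iops = 2000; BlockKB = 8;  RandomPct = 90 }
    [pscustomobject]@{ Name = 'logs.vmdk'; Iops = 800;  BlockKB = 64; RandomPct = 10 }
    [pscustomobject]@{ Name = 'os.vmdk';   Iops = 50;   BlockKB = 4;  RandomPct = 50 }
)

# Larger blocks move more data per IO; random IO is costlier than sequential.
$costs = $vdisks | ForEach-Object {
    [pscustomobject]@{
        Name = $_.Name
        Cost = $_.Iops * $_.BlockKB * (1 + $_.RandomPct / 100)
    }
}
$total = ($costs | Measure-Object -Property Cost -Sum).Sum

foreach ($c in $costs) {
    '{0}: {1:N1}% of performance resources' -f $c.Name, (100 * $c.Cost / $total)
}
```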

Operationally, this simplifies architecture: the IT team doesn’t have to figure out the number of LUNs/volumes, their sizes, which workloads would work well together, and so on. It can focus on just adding VMs to a datastore/storage repository as long as capacity and performance headroom are available (as shown by the Tintri dashboard).

On-premises real-time analytics

On-prem analytics are extracted from a storage system by a built-in or external application deployed within the environment. Admins can consult these real-time analytics to troubleshoot a live problem, or store them as historical information. They can also use these analytics to drive a prescriptive approach to workload placement and to provide short-term historical data for trending, reporting and chargeback.

Tintri VMstore takes advantage of its built-in analytics to deliver an on-prem analytics solution through both the VMstore GUI and Tintri Global Center. Up to a month of history can be imported into software like vRealize Operations, Nagios, SolarWinds and more.

Of course, customers don’t have to wait before they can see these analytics – unlike with cloud-based analytics, they can monitor systems in real time.

Cloud-based predictive analytics

Cloud-based analytics help customers with long-term trending, what-if scenarios, predictive and comparative analytics. But not all cloud-based analytics are created equal. Some just show the metrics, while others let you trend storage capacity and performance. But the majority of them can’t go application-granular across multiple hypervisors, especially in a virtual environment. They’re just statistical guesswork based on LUN/volume data.

And that’s where Tintri Analytics separates itself from the pack. With a VM-aware approach, we understand applications, group them automatically and provide great insights across customers’ data.

Your IT team wants to be proactive, solving business problems instead of grinding through mundane day-to-day tasks. That’s why each of these three categories of analytics is a must-have. With Tintri Analytics, Tintri is committed to reducing the pressure on storage and system admins, and to helping grow, not stall, your organization.

Cheers..

@storarch

What’s New in All-Flash?

Today, Tintri announced the Tintri VMstore T5000 All-Flash series—the world’s first all-flash storage system that lets you work at the VM level—leading a launch that includes Tintri OS 4.0, Tintri Global Center 2.1 and VMstack, Tintri’s partner-led converged stack. Since its inception in 2008, Tintri has delivered differentiated and innovative features and products for next-generation virtualized datacenters. And we’re continuing the trend with the game-changing All-Flash VM-Aware Storage (VAS).

Other all-flash vendors claim all-flash can be a solution for all workloads—a case of “if all you have is a hammer then everything looks like a nail.” Or, they’ll argue that all-flash can augment hybrid deployments, with the ability to pin or move entire LUNs and volumes.


But not all workloads in a LUN or volume may have the same needs for flash, performance and latency. So just as we’ve reinvented storage over the past four years, Tintri’s ready to reinvent all-flash. Here’s how:

  • No LUNs. Continuing the Tintri tradition, the T5000 series eliminates LUNs and volumes, letting you focus on applications. We’re welcoming VMs to the all-flash space across multiple hypervisors.
  • Unified management. Aside from standalone installations, the T5000 series can also augment the T800, and vice-versa. Admins can now manage VMs across hybrid-flash and all-flash platforms in a single pool through Tintri Global Center (TGC), with full integration.
  • Policy-based automation. Fully automated, policy-based infrastructure through TGC, supported by vDisk-granular analytics and VM-granular self-managed service groups.

With access to vDisk-granular historical performance data, SLAs and detailed latency information, customers can decide which workloads will benefit from all-flash vs. hybrid-flash – especially when our hybrid-flash delivers 99–100% of IO from flash.

But we hear you, storage admins: you want to go into the weeds. Surprise—we’re happy to help. Here’s what else the T5000 series can offer you:

  • Space savings from inline dedupe, compression, cloning and thin provisioning.
  • NVDIMMs, NTB, 10G and more of the latest hardware advancements.
  • Enterprise reliability exceeding 99.999% uptime.
  • Scale of up to 112,000 VMs, 2.3PB and up to 5.4M IOPS (random 60:40 R:W, 8K) in a single TGC implementation. (These are real-life numbers, not 100%-read numbers.)
  • VM-granular snapshots, cloning and replication.
  • vDisk-level Dynamic QoS to eliminate noisy neighbors and ensure peak performance.
  • VM-level manual QoS to set up performance SLAs through min and max IOPS.
  • vDisk-level (VMDK, VHD) data synchronization across VMs for test/dev or any operation requiring periodic copying of data.
  • VM-level replication, backup and transfer between hybrid-flash and all-flash systems.
  • VM-granular performance analytics with end-to-end latency visualization that includes host, network, storage, contention and throttle latency.

Today, Tintri continues our solid roadmap of business-relevant innovations in storage for modern workloads. We changed the game for hybrid-flash—and we’re doing it again for all-flash.

Cheers,
@storarch

The industry is validating Tintri – Another one comes through

The last few weeks have been great in terms of industry recognition of how Tintri has been approaching the storage problem for virtualized workloads.

First, VVols went GA, validating the approach Tintri took seven years back with VMstore: removing the boundaries of LUNs and volumes in virtualized environments and, around four years back, shipping a product that delivered a VM-centric storage platform. The result is four years of product maturity (and a four-year lead) based on real-world deployments.

Now Pure Storage has announced an integration with VMTurbo that lets customers use VMTurbo with Pure Storage to automate the movement of VMs from one LUN to another based on various conditions, including performance and latency.

What does this tell us?

Continue reading

FY16, VVols and Tintri’s Financial Differentiation

vSphere 6.0 is GA, and at PEX this week Tintri announced support for vSphere 6.0, VVols and VMware Integrated OpenStack, along with a plugin for vRealize Operations (vROps). We also finished our fiscal year in January and head off to our Sales Kick Off next week. FY15 was a great year for Tintri, with tremendous growth and record quarters. Tintri continues to lead the way with a product designed from the ground up for both flash and virtualization. With vSphere 6.0, we will bring all the goodness customers love about VMstore to VVols, including some key differentiators that separate us from the pack:

  • 99-100% IO from flash
  • VM granular operations
  • VM granular visibility and latency visualization
  • Per VM/VVol analytics
  • Automatic per-VM partitioning of storage resources (Performance Reserves) based on the analytics
  • QoS and performance fair-sharing at the VM level (there are a lot more exciting things coming in this space – stay tuned for an update)
  • Latency visualization across the infrastructure (host, network, storage) – we are going to add more to this in the coming weeks; stay tuned
  • 1M VVols per VMstore
    • With VVols, a single VM may need as few as 5 VVols or as many as hundreds (with snapshots, clones, etc.), so a 1,000-VM install can require tens of thousands to hundreds of thousands of VVols.
  • VM-granular automation
  • Ability to manage, monitor and analyze up to 112,000 VMs from a single pane of glass

The VVol race that storage vendors are starting now was won by Tintri four years back. If you will be evaluating VVols in the coming weeks or months, you should definitely read this blog to understand what to ask of your storage vendor when it comes to VVols.

From the beginning, Tintri has focused on using software to drive innovation, and one of the key differentiators of the technology is its ability to deliver 99–100% of IO from flash through software, unlike all-flash vendors that use brute force to deliver performance. The advantage is that Tintri can address a broad spectrum of workloads at a very low cost.


What this means is that unlike an all-flash solution, where $/IOP is low but $/GB is high, or a hybrid solution, where $/IOP is high but $/GB is low, Tintri brings both $/GB and $/IOP down to low levels without over-depending on space savings (dedupe and compression), delivering a better $/workload at very high density.
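
To make that concrete, here is a minimal sketch with entirely hypothetical price points (none of these are real platform numbers), using one simple model: a workload needs enough of both capacity and performance, so whichever requirement is more expensive on a given platform sets that workload’s cost:

```powershell
# Hypothetical $/GB and $/IOP price points, purely for illustration.
$platforms = @(
    [pscustomobject]@{ Name = 'All-flash'; DollarPerGB = 5.00; DollarPerIOP = 0.10 }
    [pscustomobject]@{ Name = 'Hybrid';    DollarPerGB = 1.00; DollarPerIOP = 0.50 }
    [pscustomobject]@{ Name = 'Both-low';  DollarPerGB = 1.50; DollarPerIOP = 0.15 }
)

# A sample workload: 500 GB of capacity that needs 2,000 IOPS.
$gb   = 500
$iops = 2000

foreach ($p in $platforms) {
    # You must buy enough of BOTH resources; the pricier requirement
    # (capacity or performance) sets the cost of the workload.
    $cost = [math]::Max($gb * $p.DollarPerGB, $iops * $p.DollarPerIOP)
    '{0}: ${1:N0} per workload' -f $p.Name, $cost
}
```

With these made-up numbers, the platform that keeps both dimensions low wins ($750 vs. $1,000 for hybrid and $2,500 for all-flash), even though it leads on neither $/GB nor $/IOP individually.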


Our focus on virtualization continues to help us differentiate and bring new virtualization- and cloud-centric functionality to market faster. The result is a platform that is 5-10x cheaper on CAPEX, 60x cheaper on OPEX and highly scalable.

Cheers..

@storarch

Simplifying Storage Chargeback/Showback with Tintri – Part 3 (Performance)

In Part 1 of this series, I covered the challenges around storage chargeback and showback in virtualized environments running on traditional storage platforms that use LUN/volume abstraction layers. In Part 2, I covered how Tintri simplifies and brings more accuracy to the capacity-centric model with its VM-centric design.

In this post, I will cover how Tintri makes it easy to incorporate a more accurate performance-based model into chargeback/showback, one that can be used in combination with the capacity-centric model.

As I stated in Part 1, although everyone would like to add a performance-centric model to chargeback/showback, it is not that popular, given the complexity of implementing one on the storage side in a virtualized environment.

As we all know, one of Tintri’s big advantages (and differentiators) is that its abstraction layer is the vDisk (VMDK, VHD, etc.), not a LUN or volume, which lets it observe workloads at the right level of abstraction. The other advantage is that, because of its tight API integration with the various hypervisors, it sees a vDisk as a vDisk and not as just another file. This allows it to build an IO profile of every vDisk in the system – the type of IO taking place inside the vDisk (random vs. sequential, % reads, % writes, block size) – based on which it assigns ‘Performance Reserves’ to each vDisk. Performance Reserves are a combination of various resources in the storage platform – flash, CPU shares, network buffers, etc. – and are shown by the Tintri management interface as percentage values.

So if we consider a database VM with a C: drive, a D: drive (DB files) and an E: drive (logs), Tintri looks at those individual drives, learns their IO profiles (say D: is random IO, E: is sequential and C: sees little IO) and automatically assigns Performance Reserves to each, with the goal of delivering 99–100% of IO from flash at sub-ms response times. The first benefit of this approach (as discussed in my other posts) is that Tintri tunes itself automatically, unlike other storage products that just show some values (most of them at a LUN/volume level) and require the admin to take action – add/buy more flash, reshuffle VMs, reduce CPU/cache load and so on. The second benefit, the one we will discuss here in more detail, is that IT and service providers now have a metric against which they can do chargeback/showback.

Performance Reserves are independent of IOPS; they measure how much of the storage platform’s resources a vDisk consumes (or will consume) to get 99–100% of IO from flash at sub-ms response times. Traditional performance-centric models that use IOPS as the measure don’t take into consideration the amount of storage resources a vDisk consumes for a particular type and size of IO.

The problem is the traditional underlying storage architecture, which continues to use LUNs/volumes as the abstraction layer. In some cases these storage products can’t even provide quality of service at those abstraction layers, let alone look at individual vDisks and automatically tune storage resources for a workload.

Here is a simple example of how three VMs with different IO characteristics get their Performance Reserves assigned by Tintri.

The VM ss_testld_1 in the first screenshot is doing 2,751 IOPS (avg. 8K block size) with 90% reads and has 5.2% Performance Reserves allocated to it.

Screenshot-1 - VM ss_testld_1

The VM ss_testld_2 in the second screenshot is doing 1,112 IOPS (avg. 81.6K block size) with 90% reads and has 10.8% Performance Reserves allocated to it.

Screenshot-2 - VM ss_testld_2

The VM ss_testld_3 in the third screenshot is doing 1,458 IOPS (avg. 8K block size) with 90% writes and has 4.6% Performance Reserves allocated to it.

Screenshot-3 - VM ss_testld_3

So what do we see here?

The VM ss_testld_1 is doing 90% reads, just like ss_testld_2, and almost 2.5x the IOPS (2,751 vs. 1,112), but it still has a lower Performance Reserves footprint (5.2% vs. 10.8%) because it has a much smaller block size (avg. 8K vs. avg. 81.6K).

So, if we look at just IOPS, the cost of running ss_testld_1 seems higher, but in reality the cost of running ss_testld_2 is more than double that of ss_testld_1.

In the same way, ss_testld_1 is doing almost double the IOPS of ss_testld_3 (2,751 vs. 1,458) with the same block size (avg. 8K), but they consume similar Performance Reserves (5.2% and 4.6%) because the latter is write-heavy (90% writes).

Again, looking at just IOPS, the cost of running ss_testld_1 seems higher, but in reality the cost of running ss_testld_3 is almost the same, even though ss_testld_3 is doing half the IOPS.

Taking a very simplistic chargeback/showback example, if we put the cost of 100% Reserves at, say, $100,000, based on the cost to acquire, install and support Tintri (the short script after this list reproduces the arithmetic):

  • ss_testld_1 would cost around $5.2K to run
  • ss_testld_2 would cost around $10.8K to run
  • ss_testld_3 would cost around $4.6K to run
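
That arithmetic is simple enough to script. A minimal sketch, reusing the reserve percentages from the screenshots above and the illustrative $100,000 total platform cost:

```powershell
# Illustrative total cost to acquire/install/support the platform,
# representing 100% of Performance Reserves.
$totalCost = 100000

# Per-VM Performance Reserves (%) as reported by the VMstore.
$reserves = [ordered]@{
    'ss_testld_1' = 5.2
    'ss_testld_2' = 10.8
    'ss_testld_3' = 4.6
}

foreach ($vm in $reserves.Keys) {
    # Chargeback = the VM's share of Performance Reserves x total cost.
    '{0}: ${1:N0}' -f $vm, ($totalCost * $reserves[$vm] / 100)
}
```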

Had we taken an IOPS-centric model, the costs would have come out completely different, bearing no relation to the amount of storage resources consumed to run each type of workload.

The other big challenge with an IOPS-centric model is its unpredictability. Say we sized a storage platform for X IOPS (8K, with some read:write and random:sequential mix) and priced per IOP accordingly. What if the platform delivers only Y IOPS, because our assumptions were wrong or the workloads were completely unpredictable? The shortfall (X − Y) now has to be absorbed somewhere, so to cover it, IT or the service provider charges more next time and becomes uncompetitive.

The Performance Reserves metric is independent of IOPS and can be combined with a capacity-centric model (for capacity-hungry VMs) to give a more accurate chargeback/showback model.

The cool thing about Tintri’s metrics is that they are all exposed through our REST APIs and PowerShell integration, so they can plug into any customized model as well, giving a more predictable, simplified and accurate chargeback/showback model.
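
As a hedged sketch of what that plumbing can look like with the Automation Toolkit – Connect-TintriServer and Get-TintriVM are Toolkit cmdlets, but the property paths used below for the VM name and reserves value are assumptions, so inspect the objects returned in your own environment (or pull the same data via the REST API):

```powershell
# Sketch: pull per-VM metrics and feed them into a custom chargeback
# report. Property paths on the returned objects are assumptions --
# verify against Get-TintriVM output before relying on them.
Connect-TintriServer -Server vmstore01.example.com   # example server name

$totalCost = 100000   # illustrative cost of 100% Performance Reserves

Get-TintriVM | ForEach-Object {
    $pct = $_.Stat.PerformanceReserveUsed            # assumed property path
    [pscustomobject]@{
        VM          = $_.Vmware.Name                 # assumed property path
        ReservesPct = $pct
        Chargeback  = $totalCost * $pct / 100
    }
} | Sort-Object Chargeback -Descending | Format-Table -AutoSize
```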

Thanks for reading…

Cheers.

@storarch