Some thoughts on NetApp’s acquisition of Solidfire

Yesterday, NetApp announced that it has entered into a definitive agreement to acquire Solidfire for $875M in an all-cash transaction. Having spent more than 7 years at NetApp, I thought I would provide my perspective on the deal.

As we all know, NetApp had a three-way strategy around flash. First, All-Flash FAS for customers looking to get the benefits of the data-management-feature-rich ONTAP, but with better performance and lower latency. Second, E-Series for customers looking to use application-side features with a platform that delivered raw performance. And third, FlashRay for customers looking for something designed from the ground up for flash, able to utilize denser, cheaper flash media to deliver a lower-cost alternative with inline space efficiency and data-management features.

The Solidfire acquisition replaces the FlashRay portion of the strategy. The FlashRay team took forever to get a product out the door and then, surprisingly, couldn't even deliver on HA. The failure to deliver on FlashRay is definitely alarming, as NetApp had some quality engineers working on it. Solidfire gives NetApp faster time to market (relatively speaking). Here is why I think Solidfire made the most sense for NetApp –

  • Solidfire arguably gives NetApp a highly scalable block-based product (at least on paper). Solidfire's Fibre Channel approach is a little funky, but let's ignore that for now.
  • Solidfire is one of the vendors out there that has native integration with cloud which plays well with NetApp’s Data Fabric vision.
  • Solidfire is only the second flash product out there designed from the ground up that can do QoS. I am not a fan, as you can read here, but they are better than the pack. (You know which one the other is – Tintri hybrid and all-flash VMstores, with more granular per-VM QoS, of course.)
  • Altavault gives NetApp a unified strategy to back up all NetApp products, so the all-flash product no longer has to work with SnapVault or other ONTAP functionality. The field teams would still like to see tighter integration with SnapManager and the like; since most modern products make good use of APIs, that should not be difficult. (One of the key reasons NetApp wanted to develop an all-flash product internally was that they wanted it to work with ONTAP – you are not surprised, are you?)
  • Solidfire has a better story around service providers than some of the other traditional all-flash vendors, and service providers are a big focus within NetApp.
  • Solidfire is relatively open about running Element OS on hardware beyond the Dell and Cisco platforms it supports today. I will add that, from what I have gathered, Solidfire retains a fair amount of control over what type of hardware one can use, so it's not as open as some of the other solutions out there.
  • And yes, Solidfire would have been much cheaper than other, more established alternatives, making the deal sweeter.

I won't go into where Solidfire as a product misses the mark. You can find those details around the internet. Look here and here.

Technology aside, one of the big challenges for NetApp will be execution at the field level. The NetApp field sales team always leads with ONTAP, and the optimization of ONTAP for all-flash will make it difficult for the Solidfire product to gain mindshare unless leadership puts a specific strategy in place to change this behavior. Solidfire would be going from a sales team that woke up every day to sell and create opportunity for its product to a team that historically hasn't sold anything other than ONTAP. Hopefully, NetApp can get around this and execute in the field. At least that's what Solidfire employees will be hoping for.

What's next for NetApp? I can't remember where, but I think someone on Twitter, a blog or a podcast mentioned that NetApp may go private in the coming year(s). It sounds crazy, but I think it's the only way for companies like NetApp/EMC to restructure and remove the pressure of delivering top-line growth, especially with falling storage costs, improvements in compute hardware, the move toward more software-centric sales, utility-based pricing models and cloud.

From a Tintri standpoint, the acquisition doesn't change anything. We believe that flash is just a medium, and products like Solidfire, Pure Storage, XtremIO, or any product that uses LUNs and volumes as the abstraction layer, have missed an opportunity to change the approach to handling modern workloads in the datacenter. LUNs and volumes were designed specifically for physical workloads, and we have made them work with virtual workloads through overprovisioning and constant babysitting. Flash just throws a lot of performance at the problem and contributes to overprovisioning. Whether customers deploy Solidfire, Pure Storage or XtremIO, there will be no change; it would just delay the inevitable. So pick your widget based on the incumbent in your datacenter or based on price.

If you want to fix the problem, remove the pain of constantly managing and reshuffling storage resources, and make storage invisible, then talk to Tintri.

Contact us and we will prove that we can drive down CAPEX (up to 5x) and OPEX (up to 52x) and save you time with VM-aware storage.


While you are at it, don't forget to check out our Simplest Storage Guarantee here.


Cheers..

@storarch

Choosing analytics: built-in, on-premises, or cloud-based

With the announcement of Tintri Analytics, we delivered on our vision: comprehensive, application-centric, real-time analytics (via fully integrated on-prem and cloud-based solutions) that provide predictive, actionable insights based on up to three years of historical system data.

Customers can now automatically (or manually) group VMs based on applications to analyze application profiles, run what-if scenarios, and model workload growth in terms of performance, capacity and flash working sets.
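To make the idea of a what-if scenario concrete, here is a minimal sketch of this kind of growth modeling. It is illustrative only, not Tintri's actual implementation; the application profile numbers and field names are assumptions.

```python
# Illustrative what-if model: project capacity, performance and flash
# working-set needs when an application group grows by a number of VMs.
# (Not Tintri's implementation; profile values are hypothetical.)

def project_growth(profile, extra_vms):
    """profile holds per-VM averages for one application group."""
    vms = profile["vms"] + extra_vms
    return {
        "vms": vms,
        "capacity_gib": vms * profile["gib_per_vm"],
        "iops": vms * profile["iops_per_vm"],
        "working_set_gib": vms * profile["working_set_gib_per_vm"],
    }

# Hypothetical profile for a group of SQL Server VMs.
sql_profile = {"vms": 40, "gib_per_vm": 200, "iops_per_vm": 500,
               "working_set_gib_per_vm": 25}

# What if we add 10 more SQL Server VMs next quarter?
print(project_growth(sql_profile, extra_vms=10))
```

A real analytics engine would derive the per-VM averages from observed history rather than fixed constants, but the projection step looks much like this.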

When you consider a storage solution refresh, analytics probably tops your list of needed features. It simplifies IT’s job, makes IT more productive and helps organizations save time and money.

The question is, what type of analytics should an organization look to have—built-in, on-premises or cloud-based? If you are just getting started, any sort of analytics would be great! Most storage vendors have an on-premises and/or a cloud-based solution. But an ideal storage product should have all three, as each of them has its own irreplaceable use case. Let’s take a look at each one.

Built-in analytics for auto-tuning

Built-in analytics that the system uses for self-tuning are uncommon in the industry. Tintri's unique auto-QoS capability is a great example: it uses built-in analytics, available at the vDisk level, to logically divide all the storage resources and allocate the right number of shares of the right type of resource (flash, CPU, network buffers, etc.) to each vDisk. By doing this, a Tintri VMstore ensures that each vDisk is isolated from the others, with no noisy neighbors.

Operationally, this simplifies architecture: the IT team doesn't have to figure out the number and size of its LUNs/volumes, which workloads would work well together, and so on. It can focus on simply adding VMs to a datastore/storage repository as long as capacity and performance headroom are available (as shown on the Tintri dashboard).
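As a rough mental model of per-vDisk allocation, consider the sketch below. This is an assumed proportional-share scheme for illustration, not Tintri's actual auto-QoS algorithm; the vDisk names and pool size are hypothetical.

```python
# Assumed model (not Tintri's actual algorithm): each vDisk receives a
# share of every resource pool in proportion to its observed demand, so
# one busy vDisk cannot starve its neighbors of the whole pool.

def allocate_shares(pool_total, demands):
    """Split a resource pool across vDisks proportionally to demand."""
    total_demand = sum(demands.values())
    if total_demand == 0:
        return {vdisk: 0.0 for vdisk in demands}
    return {vdisk: pool_total * d / total_demand
            for vdisk, d in demands.items()}

# Hypothetical flash pool of 1000 GiB split across three vDisks.
flash = allocate_shares(1000, {"vm1-disk0": 300,
                               "vm2-disk0": 100,
                               "vm3-disk0": 100})
print(flash)  # vm1-disk0 gets 600.0 GiB, the others 200.0 GiB each
```

The same split can be applied independently to each resource type (flash, CPU, network buffers), which is what makes the isolation per-vDisk rather than per-LUN.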

On-premises real-time analytics

On-prem analytics are extracted from a storage system by a built-in or external application deployed within the environment. Admins can consult these real-time analytics to help troubleshoot a live problem or store them for historical information. Admins can further use these analytics to help their storage solution deliver a prescriptive approach to placing workloads, and provide short-term historical data for trending, reporting and chargeback.

Tintri VMstore takes advantage of its built-in analytics to deliver an on-prem solution for analytics through both the VMstore GUI and Tintri Global Center. Up to a month of history can be imported into software like vRealize Operations, Nagios, Solarwinds and more.

Of course, customers don't have to wait before they can see these analytics—unlike with cloud-based analytics, they can monitor systems in real time.

Cloud-based predictive analytics

Cloud-based analytics help customers with long-term trending, what-if scenarios, and predictive and comparative analytics. But not all cloud-based analytics are created equal. Some just show the metrics, while others let you trend storage capacity and performance. Most of them, however, can't provide application-level granularity across multiple hypervisors in a virtual environment; they offer just statistical guesswork based on LUN/volume data.

And that's where Tintri Analytics separates itself from the pack. With its VM-aware approach, it understands applications, groups them automatically and provides great insights across customers' data.

Your IT team wants to be proactive, solving business problems instead of doing mundane day-to-day tasks. That's why each of these three categories of analytics is a must-have. With Tintri Analytics, Tintri is committed to reducing the pressure on storage and system admins, and to helping grow, not stall, your organization.

Cheers..

@storarch

Teleport your Storage with NetApp vFilers

In my last blog, I talked about how NetApp, along its journey, has contributed to software-defined storage. One of the capabilities I discussed was vFilers. vFilers are logical constructs that abstract data management from the hardware, making them secure and portable. One of their cool capabilities is the ability to fail over to another storage controller on the same or another site.

So What?

One of the big challenges around DR is that executing a DR plan involves multiple steps: bringing individual volumes/LUNs out of read-only mode on the DR site, changing network settings, doing server-specific tasks and so on. Most of the time, these are all done manually. It gets even more complicated if you host file shares: one has to additionally create the shares, users and policies on the other site, or keep them updated all the time, which can be very cumbersome. With a planned failover or failback, one also has to make sure that individual volumes are updated up to the last minute. This can turn out to be not only a long process but an error-prone one, affecting the Recovery Time Objective or the recovery itself, and thus the business.

vFilers to the Rescue

A vFiler contains volumes, LUNs, logical IPs, shares, users, groups and policies, to name a few. Think of it as a storage system in itself, where Data ONTAP acts like a storage hypervisor and the vFilers are the storage VMs. One of the unique features of vFilers is vFiler DR, which lets you fail over a vFiler from one storage controller to another with a single click. Think of it as teleportation: the vFiler disappears from one site and appears on the other with all its contents, properties and characteristics.

The way it works is that when you initially set up vFiler DR, it creates all the volumes, shares, users, etc. on the destination and keeps them in sync through SnapMirror (NetApp's replication software). One can set the sync interval at the individual volume level based on the Recovery Point Objective (RPO). It also lets you specify any changes in IP addresses, DNS, NIS and AD that you want it to make when failing over to the DR site. Once set up, you end up with two vFilers: one at the source in the online state and the other at the destination in the offline state. vFiler DR monitors any changes on the source and syncs them. It also alerts you if any new volumes have been added to the source and either lets you create them or creates them on its own with the necessary sync interval.

At the time of an unplanned outage, all the user has to do is activate the vFiler on the DR site, which makes the necessary changes to the vFiler (IP address, DNS, AD, etc.) and brings it online up to the last synced state. For a planned failover or failback, the user has the option to initiate a resync of the vFiler (including volumes/LUNs, users, shares and other configuration) before activating it on the destination. Now compare that to any traditional deployment, where one has to update all the relationships manually.
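The workflow above can be sketched as a simple state model: two copies of the vFiler, periodic sync, and a single activation step that flips which side is online. This is an illustrative sketch only; the class, states and volume names are hypothetical and do not correspond to ONTAP's actual objects or commands.

```python
# Hedged sketch of the vFiler DR workflow: a source vFiler stays online,
# a destination copy stays offline and is kept in sync (SnapMirror-style);
# activation flips the destination online in one step. Names are invented.

class VFilerDR:
    def __init__(self, volumes):
        self.source = {"state": "online", "volumes": dict(volumes)}
        self.dest = {"state": "offline", "volumes": {}}
        self.sync()  # initial setup mirrors everything to the DR site

    def sync(self):
        # Periodic update: mirror volumes (and, in reality, shares,
        # users and policies) from source to destination.
        self.dest["volumes"] = dict(self.source["volumes"])

    def activate_dr(self, planned=False):
        if planned:
            self.sync()  # planned failover: resync up to the last minute
        self.source["state"] = "offline"
        self.dest["state"] = "online"  # single-step activation

dr = VFilerDR({"vol_db": "100GiB", "vol_shares": "50GiB"})
dr.activate_dr(planned=True)
print(dr.dest["state"], sorted(dr.dest["volumes"]))
```

The point of the model is the contrast with manual DR: all the per-object steps (shares, users, network identity) collapse into the one `activate_dr` call.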

You may wonder how great it would be to make this an online process where none of the applications has to be taken down. Why not? We can do that too, provided you can foresee a disaster and meet the distance limitations. Here is a demo that I shared in my previous blog that demonstrates exactly this.

Looking at the current trends, this has a big use case in Hybrid Cloud environments apart from Public and Private Clouds (and the traditional use cases).

Who said teleportation is difficult? We made it possible in the storage world a long time back. No one had even thought about the possibilities of a hybrid cloud back then.

Wait a minute … No one?

Cheers..

Satinder (@storarch)

Edit: Removed one of the demo videos where I used a beta functionality which didn’t make it to GA. The blog’s content is still applicable.

Back to the Future with Software Defined Storage

Every few years, our industry comes up with a new buzzword that takes over vendor collateral, webpages, blogs, tweets and more. We had virtualization, then cloud, and now it is the software-defined datacenter. From NetApp's perspective, none of this was new. NetApp started virtualization at the storage/data-block level way back in 1992. A lot of major innovations came in the 2000s, when technologies like V-Series, FlexVols, RAID-DP, thin provisioning, writeable clones, deduplication, file/sparse-file-level clones and FlashCache were brought to market by NetApp (for a summary of NetApp's innovations, see this page here). When VMware started to grow rapidly, NetApp followed suit, as it was ahead of the curve and ready for growth in virtual environments.

Continue reading