Some thoughts on NetApp’s acquisition of SolidFire

Yesterday, NetApp announced that it has entered into a definitive agreement to acquire SolidFire for $875M in an all-cash transaction. Having spent more than 7 years at NetApp, I thought I would provide my perspective on the deal.

As we all know, NetApp had a three-pronged flash strategy. First, All-Flash FAS, for customers looking to get the benefits of the data-management-rich ONTAP but with better performance and lower latency. Second, E-Series, for customers looking to use application-side features on a platform that delivers raw performance. And third, FlashRay, for customers looking for something designed from the ground up for flash that could use denser, cheaper flash media to deliver a lower-cost alternative with inline space efficiency and data-management features.

The SolidFire acquisition replaces the FlashRay portion of that strategy. The FlashRay team took forever to get a product out of the door and then, surprisingly, couldn’t even deliver on HA. The failure to deliver on FlashRay is definitely alarming, as NetApp had some quality engineers working on it. SolidFire gives NetApp a faster time to market (relatively speaking). Here is why I think SolidFire made the most sense for NetApp –

  • SolidFire arguably gives NetApp a highly scalable block-based product (at least on paper). SolidFire’s Fibre Channel approach is a little funky, but let’s ignore that for now.
  • SolidFire is one of the vendors out there with native cloud integration, which plays well with NetApp’s Data Fabric vision.
  • SolidFire is only the second flash product out there designed from the ground up that can do QoS. I am not a fan of its approach, as you can read here, but it is better than the pack. (You know which one is the other – Tintri’s hybrid and all-flash VMstores, with more granular per-VM QoS, of course.)
  • AltaVault gives NetApp a unified strategy for backing up all NetApp products, so the all-flash product no longer has to work with SnapVault or other ONTAP functionality. The field teams would still like to see tighter integration with SnapManager and the like, but since most modern products make good use of APIs, that should not be difficult. (One of the key reasons NetApp wanted to develop an all-flash product internally was that they wanted it to work with ONTAP – you are not surprised, are you?)
  • SolidFire has a better story around service providers than some of the other traditional all-flash vendors out there, and service providers are a big focus within NetApp.
  • SolidFire is open about running Element OS on any hardware, not just the Dell and Cisco platforms it supports today. I will add that, from what I have gathered, SolidFire keeps tighter control over what type of hardware one can use, so it is not as open as some of the other solutions out there.
  • And yes, SolidFire would have been much cheaper than other, more established alternatives out there, making the deal sweeter.

I will not go into where SolidFire as a product misses the mark. You can find those details around the internet – look here and here.

Technology aside, one of the big challenges for NetApp will be execution in the field. The NetApp field sales team always leads with ONTAP, and the optimization of ONTAP for all-flash will make it difficult for the SolidFire product to gain mindshare unless leadership puts a specific strategy in place to change this behavior. SolidFire would be going from a sales team that woke up every day to sell and create opportunity for its product to a team that historically hasn’t sold anything other than ONTAP. Hopefully, NetApp can get around this and execute in the field. At least that’s what SolidFire employees will be hoping for.

What’s next for NetApp? I can’t remember where, but I think someone on Twitter, a blog, or a podcast mentioned that NetApp may go private in the coming year(s). It sounds crazy, but I think it’s the only way for companies like NetApp and EMC to restructure and remove the pressure of delivering top-line growth, especially with falling storage costs, improving compute hardware, the move toward more software-centric sales, utility-based pricing models, and the cloud.

From a Tintri standpoint, the acquisition doesn’t change anything. We believe that flash is just a medium, and products like SolidFire, Pure Storage, XtremIO, or any product that uses LUNs and volumes as the abstraction layer have missed the opportunity to bring a change of approach for handling modern workloads in the datacenter. LUNs and volumes were designed specifically for physical workloads, and we have made them work with virtual workloads through overprovisioning and constant babysitting. Flash just throws a lot of performance at the problem and contributes to more overprovisioning. Whether customers deploy SolidFire, Pure Storage, or XtremIO, nothing will change; it will just delay the inevitable. So pick your widget based on the incumbent in your datacenter or based on price.

If you want to fix the problem, remove the pain of constantly managing and reshuffling storage resources, and make storage invisible, then talk to Tintri.

Contact us and we will prove that we can drive down CAPEX (up to 5x) and OPEX (up to 52x) and save you time with VM-aware storage.


While you are at it, don’t forget to check out our Simplest Storage Guarantee here.


Cheers..

@storarch

 

 

What’s new in Data ONTAP 8.2

Data ONTAP 8.2 RC1 has been posted to the NetApp support site. Being a major release after 8.1.x, it brings a lot of new functionality. Here is a quick (not exhaustive) list of features that have been added to Clustered Data ONTAP (cDOT) 8.2 –

The Potential of Server Side Cache

Flash has changed the way storage is architected in the datacenter. The fear of using higher-capacity drives for high-performance applications is a thing of the past. I remember having conversations with customers who were up for a tech refresh about their concerns around refreshing the hard drives supporting their applications. The use of flash in different forms has taken care of those concerns. What we are striving for now is the use of just two tiers in the datacenter – flash (or equivalent) and SATA.

Continue reading

An Innovative way to back up to tapes

Disk-to-disk backups have become popular these days, but with a few exceptions, tape is still the norm when it comes to long-term retention, and that is what we are going to talk about here. So here is a blog post on tape, for a change.

How many times have you needed to restore a backup that was more than a few months old, only to find that the monthly backup was all you could restore? A request to restore a version of data that is more than a few months old is rare, but when it comes, it is important and sometimes has financial implications attached to it.

Continue reading

The best part of Clustered ONTAP – No Striped Volumes/LUNs

I had a great week last week, and the highlight was a customer saying “This is too good to be true” after a Clustered ONTAP presentation – and then the same thought crossing my mind during an ONTAP roadmap session. There are great things coming in the not-so-distant future that I can’t share just yet. You can surely expect NetApp to further raise the bar in the newly created scale-out unified storage market, just as it did with unified storage.

Coming back to what I wanted to discuss: one of the questions that some customers and partners bring up about Clustered ONTAP is whether NetApp LUNs/volumes are limited to a single node or striped across the nodes.

Continue reading

New in Clustered ONTAP – Improved Single File Restore (think quick VM-level recoveries)

I met a relatively new NetApp customer a few weeks back to discuss best practices around vSphere and NetApp. While going through some of the material, he brought up a point around VSC Backup & Recovery (SMVI): when he tried restoring a complete datastore from a NetApp Snapshot backup, he found it to be much faster than restoring a single VM from the same Snapshot. I explained that while the datastore restore uses SnapRestore, which simply reverts the volume’s pointers to a previous point in time and results in near-instantaneous restores, the VM restore uses something called Single File SnapRestore (SFSR), which copies the files back from the Snapshot copy to the active file system.

So the time taken to restore a single VM depends on the size of the VM. I also shared with him a great workaround for near-instant VM restores: mount the backup through VSC, add the VM in the mounted backup to the inventory, power it on, and use Storage vMotion to move it wherever you want. My colleague Keith Aasen (also a fellow Canadian) has documented the process here: https://communities.netapp.com/docs/DOC-10862
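
If you want to script the tail end of that workaround, here is a rough pyVmomi sketch of the idea. It assumes the VSC-mounted backup datastore is already presented to the ESXi host, and every name in it (vCenter address, credentials, cluster, datastore and VMX path) is a placeholder – treat it as an illustration of the workflow, not a supported tool.

    # Hypothetical sketch: register a VM from a VSC-mounted backup datastore,
    # power it on, then Storage vMotion it to its permanent datastore.
    import ssl, time
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_obj(content, vimtype, name):
        """Return the first managed object of the given type with the given name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.Destroy()

    def wait(task):
        """Block until a vCenter task completes, raising on failure."""
        while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
            time.sleep(1)
        if task.info.state == vim.TaskInfo.State.error:
            raise task.info.error
        return task.info.result

    ctx = ssl._create_unverified_context()   # lab only; verify certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    content = si.RetrieveContent()

    dc = content.rootFolder.childEntity[0]                                # first datacenter
    cluster = find_obj(content, vim.ClusterComputeResource, "Cluster01")  # placeholder names
    target_ds = find_obj(content, vim.Datastore, "prod_datastore")

    # 1. Register the VM sitting on the VSC-mounted backup datastore.
    vm = wait(dc.vmFolder.RegisterVM_Task(path="[vsc_mounted_backup] myvm/myvm.vmx",
                                          name="myvm-restored", asTemplate=False,
                                          pool=cluster.resourcePool))

    # 2. Power it on for near-instant availability.
    wait(vm.PowerOnVM_Task())

    # 3. Storage vMotion it to its permanent home while it runs.
    wait(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds)))

    Disconnect(si)

The Storage vMotion in the last step is what turns the instant mount into a real restore; until it completes, the VM is still running off the backup copy.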

While the above process is great for instant restores, wouldn’t it be nice if the SFSR process itself were faster?

Why not?

Continue reading

The Dilemma of Evaluating Storage for VMware

As a Consulting Systems Engineer focused on virtualization, I get to meet a lot of customers and prospects who are evaluating storage solutions for virtualization, cloud, business-critical applications, and so on. A lot of the time, with so many options available in the storage industry carrying confusingly similar messages, this is how a technology evaluator looks at things.

Is this how you see it too? Believe it or not, this is the situation that some vendors work towards!

As we know, the devil is in the details. When a technical evaluator finds himself or herself in this situation, there are specific questions he or she can ask each vendor to get a clearer picture. I have tried to put together a list below, in no particular order.

Continue reading

Teleport your Storage with NetApp vFilers

In the last blog I talked about how NetApp, along its journey, has contributed to software-defined storage. One of the capabilities I talked about was vFilers. vFilers are logical constructs that abstract data management from the hardware, making them secure and portable. One of the cool capabilities of vFilers is the ability to fail over to another storage controller at the same or another site.

So What?

One of the big challenges around DR is that executing a DR plan involves multiple steps – for example, bringing individual volumes/LUNs out of read-only mode on the DR site, changing network settings, performing server-specific tasks, and so on. Most of the time these are all done manually. It gets even more complicated if you host file shares: one additionally has to create shares, users, and policies on the other site, or keep them updated all the time, which can be very cumbersome. With a planned failover or failback, one also has to make sure that individual volumes are updated up to the last minute. This can turn out to be not only a long process but an error-prone one, affecting the Recovery Time Objective or the recovery itself – and thus the business.

vFilers to the Rescue

A vFiler contains volumes, LUNs, logical IPs, shares, users, groups, and policies, to name a few. Think of it as a storage system in itself, where Data ONTAP acts like a storage hypervisor and the vFilers are the storage VMs. One of the unique features of vFilers is vFiler DR, which lets you fail over a vFiler from one storage system to another with a single click. Think of it as teleportation: the vFiler disappears from one site and appears on the other with all its contents, properties, and characteristics.

The way it works is that when you initially set up vFiler DR, it creates all the volumes, shares, users, etc. on the destination and keeps them in sync through SnapMirror (NetApp’s replication software). One can set the sync interval at the individual-volume level based on the Recovery Point Objective (RPO). It also allows one to specify any changes to IP addresses, DNS, NIS, and AD that you want it to make when failing over to the DR site. Once set up, you end up with two vFilers – one at the source in an online state and the other at the destination in an offline state. It monitors any changes on the source and syncs them, and it alerts you if new volumes have been added to the source, either letting you create them on the destination or creating them on its own with the necessary sync interval.

At the time of an unplanned outage, all the user has to do is activate the vFiler on the DR site, which makes the necessary changes to the vFiler (IP address, DNS, AD, etc.) and brings it online at the last synced state. For a planned failover or failback, the user has the option to initiate a resync of the vFiler (including volumes/LUNs, users, shares, and other configuration) before activating it on the destination. Now compare that to a traditional deployment, where one has to update all the relationships manually.
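
For those who like to see it end to end, here is a minimal sketch of what that workflow could look like driven from Python over SSH against the 7-Mode CLI. The vfiler dr sub-commands (configure, status, resync, activate) are written from memory, and the hostnames, vFiler names, and credentials are placeholders, so check the Data ONTAP documentation for your release before borrowing any of it.

    # Hypothetical sketch: set up, monitor, and activate vFiler DR from the destination
    # controller. Command syntax and names are illustrative, not a tested runbook.
    import paramiko

    DEST_FILER = "filer-dr.example.com"      # destination (DR) controller
    SRC = "vfiler1@filer-prod.example.com"   # source vFiler @ source controller

    def run(host, command, user="root", password="********"):
        """Run one command on a 7-Mode controller over SSH and return its output."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, password=password)
        try:
            _, stdout, stderr = client.exec_command(command)
            return stdout.read().decode() + stderr.read().decode()
        finally:
            client.close()

    # One-time setup: mirrors the vFiler's volumes and configuration to the destination.
    # The per-volume sync interval (your RPO) is then governed by the SnapMirror schedule,
    # editable in /etc/snapmirror.conf on the destination.
    print(run(DEST_FILER, f"vfiler dr configure {SRC}"))

    # Day-to-day: confirm the DR copy is in sync.
    print(run(DEST_FILER, f"vfiler dr status {SRC}"))

    # Unplanned outage: activate the DR copy at its last synced state; this also applies
    # the IP/DNS/AD changes specified at configure time.
    # print(run(DEST_FILER, f"vfiler dr activate {SRC}"))

    # Planned failover/failback: catch the destination up to the latest data, then activate.
    # print(run(DEST_FILER, f"vfiler dr resync {SRC}"))
    # print(run(DEST_FILER, f"vfiler dr activate {SRC}"))

Either way, the point of the single-click (or single-command) model stands: the shares, users, and network identity travel with the vFiler, so none of the per-volume, per-share manual steps from the traditional DR runbook are needed.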

You may think how great it would be to make this an online process, where none of the applications have to be taken down. Why not? We can do that too, provided you can foresee the disaster and meet the distance limitations. Here is a demo that I shared in my previous blog that demonstrates exactly this.

Looking at current trends, this has a big use case in hybrid cloud environments, in addition to public and private clouds (and the traditional use cases).

Who said teleportation is difficult? We made it possible in the storage world a long time back. No one had even thought about the possibilities of a hybrid cloud back then.

Wait a minute … No one?

Cheers..

Satinder (@storarch)

Edit: Removed one of the demo videos, which used a beta feature that didn’t make it to GA. The blog’s content is still applicable.

Back to the Future with Software Defined Storage

Every few years our industry comes up with a new buzzword that takes over vendor collateral, webpages, blogs, tweets, and more. We had virtualization, then we had cloud, and now it is the software-defined datacenter. From NetApp’s perspective, none of this was new. NetApp started virtualization at the storage/data-block level way back in 1992. A lot of major innovations came in the 2000s, when technologies like V-Series, FlexVols, RAID-DP, thin provisioning, writable clones, deduplication, file and sparse-file-level clones, and Flash Cache were brought to market by NetApp (for a summary of NetApp’s innovations, see this page here). When VMware started to grow rapidly, NetApp was ready to grow with it, as it was ahead of the curve in virtual environments.

Continue reading