Some thoughts on NetApp's acquisition of SolidFire

Yesterday, NetApp announced that it has entered into a definitive agreement to acquire SolidFire for $875M in an all-cash transaction. Having spent more than 7 years at NetApp, I thought I would provide my perspective on the deal.

As we all know, NetApp had a three-way strategy around flash. First, All-Flash FAS for customers looking to get the benefits of the data-management-rich ONTAP, but with better performance and lower latency. Second, E-Series for customers looking to use application-side features with a platform that delivers raw performance. And third, FlashRay for customers looking for something designed from the ground up for flash that could use denser, cheaper flash media to deliver a lower-cost alternative with inline space efficiency and data-management features.

The SolidFire acquisition is the replacement for the FlashRay portion of the strategy. The FlashRay team took forever to get a product out the door and then, surprisingly, couldn't even deliver on HA. The failure to deliver on FlashRay is definitely alarming, as NetApp had some quality engineers working on it. SolidFire gives NetApp a faster time to market (relatively speaking). Here is why I think SolidFire made the most sense for NetApp:

  • SolidFire gives NetApp an arguably highly scalable block-based product (at least on paper). SolidFire's Fibre Channel approach is a little funky, but let's ignore that for now.
  • SolidFire is one of the vendors out there with native cloud integration, which plays well with NetApp's Data Fabric vision.
  • SolidFire is only the second flash product out there designed from the ground up that can do QoS. I am not a fan, as you can read here, but they are better than the pack. (You know which one is the other – Tintri hybrid and all-flash VMstores, with more granular per-VM QoS, of course.)
  • AltaVault gives NetApp a unified strategy for backing up all NetApp products, so the all-flash product no longer has to work with SnapVault or other ONTAP functionality. The field teams would still like to see tighter integration with SnapManager and the like, but since most modern products make good use of APIs, that should not be difficult. (One of the key reasons NetApp wanted to develop an all-flash product internally was that they wanted it to work with ONTAP – you are not surprised, are you?)
  • SolidFire has a better story around service providers than some of the other traditional all-flash vendors out there, and service providers are a big focus within NetApp.
  • SolidFire's openness around running Element OS on any hardware, not just the Dell and Cisco platforms it supports today. I will add that, from what I have gathered, SolidFire keeps tighter control over which hardware can be used than you might expect, so it is not as open as some of the other solutions out there.
  • And yes, SolidFire would have been much cheaper than other, more established alternatives out there, making the deal sweeter.

I won't go into where SolidFire as a product misses the mark. You can find those details around the internet. Look here and here.

Technology aside, one of the big challenges for NetApp will be execution in the field. The NetApp field sales team always leads with ONTAP, and the optimization of ONTAP for all-flash will make it difficult for the SolidFire product to gain mindshare unless leadership puts a specific strategy in place to change this behavior. SolidFire would be going from a sales team that woke up every day to sell and create opportunities for the product to a team that historically hasn't sold anything other than ONTAP. Hopefully NetApp can get around this and execute in the field. At least that's what SolidFire employees will be hoping for.

What's next for NetApp? I can't remember where, but I think someone on Twitter, a blog, or a podcast mentioned that NetApp may go private in the coming year(s). It sounds crazy, but I think it's the only way for companies like NetApp and EMC to restructure and remove the pressure of delivering top-line growth, especially with falling storage costs, improvements in compute hardware, the move toward more software-centric sales, utility-based pricing models, and the cloud.

From a Tintri standpoint, the acquisition doesn't change anything. We believe that flash is just a medium, and products like SolidFire, Pure Storage, XtremIO, or any product that uses LUNs and volumes as the abstraction layer have missed the opportunity to change the approach to handling modern workloads in the datacenter. LUNs and volumes were designed specifically for physical workloads, and we have made them work with virtual workloads through overprovisioning and constant babysitting. Flash just throws a lot of performance at the problem and contributes to the overprovisioning. Whether customers deploy SolidFire, Pure Storage, or XtremIO, nothing will change; it just delays the inevitable. So pick your widget based on the incumbent in your datacenter, or based on price.

If you want to fix the problem, remove the pain of constantly managing and reshuffling storage resources, and make storage invisible, then talk to Tintri.

Contact us and we will prove that VM-aware storage can drive down CAPEX (by up to 5x) and OPEX (by up to 52x) and save you time.


While you are at it, don't forget to check out our Simplest Storage Guarantee here.


Cheers,

@storarch


The Need for a Game Changer in All-Flash Storage

The all-flash space has been abuzz lately with a slew of vendors announcing new developments:

  • SolidFire announced new nodes, a software-only implementation (which oddly comes without complete hardware freedom), and a new program around its Flash Forward guarantee.
  • Pure Storage announced an update to its FlashArray lineup and a program around Evergreen Storage.
  • HP announced its 20K 3PAR lineup, basically a hardware refresh.
  • EMC announced software updates to XtremIO and a lot of other flashy stuff in ScaleIO and DSSD (typical of EMC to think ahead and place multiple bets).
  • NetApp re-launched All-Flash FAS with new pricing to complement the rich data services that ONTAP brings to the table, and has been pounding its chest about how ONTAP is the best thing to happen to all-flash arrays.

(Time will tell what happens to FlashRay, which is apparently being positioned in a different category: cheaper and simpler to use. Going by my experience, it'll be a tough sell internally to move sales teams away from selling ONTAP, especially now that they have an optimized All-Flash FAS. (They should thank Gartner for that.) Contrary to popular belief, NetApp has had different products for different workloads in its portfolio (FAS, StorageGRID, E-Series, AltaVault, FlashRay), but where it has suffered, in my opinion, is in educating and convincing the NetApp field sales teams to sell anything other than ONTAP. The problem is made worse by loyal NetApp customers who want everything to work with or within ONTAP.)

The Theme

If we look at most of the announcements, we see a unifying theme: bigger, faster, cheaper, and better. This mostly results from new hardware technologies (compute), increasing flash capacities, and the falling price of flash. From a software standpoint, the newer products are catching up by adding the functionality that traditional products have had for years, while traditional products (like HP 3PAR and NetApp FAS) are optimizing their code for flash and taking advantage of their existing data services and application integrations. From a hardware standpoint, every vendor will eventually catch up with the others as they adopt the newer hardware.

Where is the Differentiator?

If we compare the all-flash offerings from the various vendors, most of them have similar features: dedupe, compression, snapshots, clones, replication, LUN/volume-based QoS, and some sort of application and cloud integration. Each vendor does one feature or another a little better, and they all struggle to find a big differentiator. When that happens, marketing starts to innovate more than engineering, and we start seeing messages like these:

  • We provide better space savings (6x vs. 5x) (yes, that's roughly 20% more effective capacity)
  • Our space-savings technology never goes post-process (okay, but the other vendor still claims better savings)
  • We provide Evergreen Storage (marketing spin on what a creative sales rep would already do at refresh time – made even easier by flash)
  • Our Flash Forward program is unique in the industry (another marketing spin)
  • We are the only vendor that provides cloud integration (not true)
  • Designed from the ground up for flash (flash is a medium, and traditional products can be optimized for flash; faster performance, better response times, or flash longevity don't necessarily require a ground-up design in every case. I say this even though, on the Tintri VMstore, the flash layer and the spinning drives have completely different block layouts, with the flash layer designed specifically for flash)
  • We have the cheapest flash solution (when nothing works, talk price)

Running out of ideas?

It’s like everyone is running out of ideas. None of these vendors have taken a “completely different” approach—and their product can be better than others’ only for a limited time. Eventually, everyone will catch up to each other. If you take the same road your competitors do, your results can’t be much different.

We can't expect traditional vendors to take a different approach unless they're developing a new product without any baggage, but younger product companies definitely have a chance to be different. Still, most of these younger companies have taken a safe approach based on 30-year-old constructs and abstractions that are not required in the modern datacenter, mainly LUNs and volumes and the challenges associated with them. These constructs worked great for some traditional workloads, but they force a lot of assumptions when architecting storage for a modern datacenter (RAID group size, block size, queue depths, LUN/volume sizes, number of LUNs/volumes, number of workloads per LUN/volume, grouping based on data-protection needs, and so on). Modern workloads are no longer tied to LUNs/volumes, which also poses a huge problem, especially for architectures designed with these constructs in mind.

Now, because the traditional vendors and the younger vendors took the same approach, it has become a contest between the two: traditional vendors are trying to optimize their products for flash, and newer vendors are trying to add functionality to match that of the traditional vendors. As I see it, the scale tips toward the traditional vendors as far as storage with a traditional approach is concerned, because instead of changing the game, the younger vendors decided to play the traditional vendors' game.


Need to be Different, not Better

Historically, the startups that make a difference are the ones that take a different approach. Data Domain, for example, defined a new model for backups. Even NetApp took a filesystem approach to storage (for file and block), enabling a completely different implementation of technologies like snapshots, clones, and primary-storage dedupe. Now everyone has some sort of filesystem layer and has caught up to the point where the lines are all blurred. NetApp is feeling the pressure now, but it took a long time for vendors to get there. There are many other examples, including ones outside the storage industry (think Uber, Airbnb, Facebook).

While starting out different is great, it is important for any vendor to stay different and keep reinventing itself (through acquisition or innovation) based on changing needs. It should not get bogged down by a "things are working well, why change anything?" mentality.

Being different changes the possibilities and gives products a chance to stand out. It allows companies to change the game and the table stakes. It allows companies to 'change the experience', which is ultimately how we evaluate any product.

As far as the all-flash market is concerned, there is a need for a product with a different approach, a product that can change the game and bring new possibilities. The need is for something designed from the ground up for the modern datacenter (and modern applications), rather than something that is just designed from the ground up for flash. Flash is just a medium, and mediums change: it's flash today, and it may be something else tomorrow.

Cheers,

@storarch

Converged Infrastructures – Trying to Cure the Symptoms, not the Disease

Converged infrastructures (CIs) and reference architectures like Vblock from VCE, VSPEX from EMC, and FlexPod from NetApp have seen a lot of growth in the last few years. The growth did not come as a surprise to anyone, given the perception that CIs solve some of the big IT pain points around sizing, architecting, standardizing, and faster time to market.

The big benefit that vendors promised with CIs was a reduction in the time required to make infrastructure ready to provide service, by reducing or eliminating the time needed in phases like architecture and sizing, detailed design, deployment, and test.

Reality

But what was the real need for CIs and reference architectures?

Continue reading

The Potential of Server-Side Cache

Flash has changed the way storage is architected in the datacenter. The fear of using higher-capacity drives for high-performance applications is a thing of the past. I remember conversations with customers who were up for a tech refresh and their concerns around refreshing the hard drives supporting their applications. The use of flash in different forms has taken care of those concerns. What we are striving for now is the use of just two tiers in the datacenter – flash (or equivalent) and SATA.

Continue reading

An Innovative Way to Back Up to Tape

Disk-to-disk backups have become popular these days, but barring a few exceptions, tape is still the norm when it comes to long-term retention, and that is what we are going to talk about here. So here is a blog post on tape, for a change.

How many times have you needed to restore a backup that was more than a few months old, only to find that the monthly backup was all you could restore? A request to restore a version of data that is more than a few months old is rare, but when it comes, it is important and can sometimes have financial implications attached to it.

Continue reading

The best part of Clustered ONTAP – No Striped Volumes/LUNs

I had a great week last week. The highlight was a customer saying "This is too good to be true" after a Clustered ONTAP presentation, and then the same thought crossing my mind during an ONTAP roadmap session. There are great things coming in the not-so-distant future that I can't share just yet. You can surely expect NetApp to further raise the bar in the newly created scale-out unified storage market, just as it did with unified storage.

Coming back to what I wanted to discuss: one of the questions that customers and partners bring up about Clustered ONTAP is whether NetApp LUNs/volumes are limited to a single node or striped across the nodes.

Continue reading

New in Clustered ONTAP – Improved Single File Restore (Think Quick VM-Level Recoveries)

I met a relatively new NetApp customer a few weeks back to discuss best practices around vSphere and NetApp. While going through some of the material, he brought up a point about VSC Backup & Recovery (SMVI): when he tried restoring a complete datastore from a NetApp Snapshot backup, he found it to be much faster than restoring a single VM from the same Snapshot. I explained that while the datastore restore uses SnapRestore, which simply reverts the volume's pointers to a previous point in time and results in a near-instantaneous restore, the VM restore uses something called Single File SnapRestore (SFSR), which copies the files back from the Snapshot copy to the active file system.

So the time taken to restore a single VM depends on the size of the VM. I also shared with him a great workaround to achieve instant VM restores: mount the backup through VSC, add the VM in the mounted backup to the inventory, power it on, and use Storage vMotion to move it wherever you want. My colleague Keith Aasen (also a fellow Canadian) has documented the process here: https://communities.netapp.com/docs/DOC-10862
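
For those who prefer to script it, here is a rough Python (pyVmomi) sketch of the same three steps. Treat it as an illustration only: it assumes VSC has already mounted the Snapshot backup as a datastore on the ESXi host, and every hostname, credential, datastore name, VM name, and path below is a placeholder rather than anything from a real environment.

    # Rough pyVmomi sketch of the instant-restore workaround (placeholder names throughout).
    # Assumes VSC has already mounted the Snapshot backup as a datastore on the ESXi host.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def find_obj(content, vimtype, name):
        """Look up a managed object (datastore, host, VM, ...) by its inventory name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
        return next(obj for obj in view.view if obj.name == name)

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    mounted_ds = find_obj(content, [vim.Datastore], "smvi_mount_ds01")  # datastore mounted by VSC
    prod_ds = find_obj(content, [vim.Datastore], "ds01")                # production datastore
    host = find_obj(content, [vim.HostSystem], "esx01.example.com")
    datacenter = content.rootFolder.childEntity[0]                      # assumes a single datacenter

    # 1. Register the VM straight from the mounted backup; nothing is copied, so this is instant.
    vmx_path = "[{}] myvm/myvm.vmx".format(mounted_ds.name)
    WaitForTask(datacenter.vmFolder.RegisterVM_Task(path=vmx_path, name="myvm_restored",
                                                    asTemplate=False,
                                                    pool=host.parent.resourcePool, host=host))
    vm = find_obj(content, [vim.VirtualMachine], "myvm_restored")

    # 2. Power the VM on so it is back in service while its data still lives on the backup mount.
    WaitForTask(vm.PowerOnVM_Task())

    # 3. Storage vMotion the running VM onto production storage in the background.
    WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=prod_ds)))

    Disconnect(si)

The point is that registering and powering on the VM are instant because no data is copied; only the Storage vMotion in the last step moves data, and it runs in the background while the VM is already back in service.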

While the above process is great for instant restores, wouldn’t it be nice if the SFSR process itself was faster?

Why not?

Continue reading