In the last blog I talked about how NetApp, along its journey, has contributed to Software-Defined Storage. One of the features I covered was vFilers: logical constructs that abstract data management from the hardware, making it secure and portable. One of the cool capabilities vFilers have is the ability to fail over to another storage controller on the same or a different site.
One of the big challenges around DR is that executing a DR plan involves multiple steps: bringing individual volumes/LUNs out of read-only mode on the DR site, changing network settings, performing server-specific tasks, and so on. Most of the time these are all done manually. It gets even more complicated if you have file shares hosted. With file shares, one additionally has to create the shares, users, and policies on the other site, or keep them updated all the time, which can be very cumbersome. With a planned failover or failback, one also has to make sure that individual volumes are updated up to the last minute. This can turn out to be not only a long process but an error-prone one, affecting the Recovery Time Objective or the recovery itself, and thus the business.
vFilers to the Rescue
A vFiler contains volumes, LUNs, logical IPs, shares, users, groups, and policies, to name a few. Think of it as a storage system in its own right, where Data ONTAP acts like a storage hypervisor and the vFilers are the storage VMs. One of the unique features of vFilers is vFiler DR, which lets you fail over a vFiler from one storage system to another with a single click. Think of it as teleportation: the vFiler disappears from one site and appears on the other site with all its contents, properties, and characteristics.
The way it works is that when you initially set up vFiler DR, it creates all the volumes, shares, users, etc. on the destination and keeps them in sync through SnapMirror (NetApp's replication software). One can set the sync interval at the individual volume level based on the Recovery Point Objective (RPO). It also allows one to specify any changes in IP addresses, DNS, NIS, and AD that you want made when failing over to the DR site. Once set up, you end up having two vFilers – one at the source in an online state and the other at the destination in an offline state. It monitors any changes on the source and syncs them. It also alerts you if any new volumes have been added to the source, and either lets you create them or creates them on its own with the necessary sync interval.
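For those curious what this looks like under the hood, here is a rough sketch of the Data ONTAP 7-Mode CLI involved. The vFiler, controller, and volume names (vf1, filerA, filerB, vol_data) are hypothetical, and exact options can vary by ONTAP release, so treat this as illustrative rather than a runbook:

```
# Run on the DR (destination) controller: create the DR copy of
# vFiler vf1 that lives on the source controller filerA.
filerB> vfiler dr configure vf1@filerA

# The per-volume sync interval is driven by SnapMirror schedules.
# An entry in /etc/snapmirror.conf on the destination, e.g. every
# 15 minutes (format: source:path dest:path args min hr dom dow):
filerA:vol_data  filerB:vol_data  -  0-59/15 * * *

# Check the state of the DR relationship:
filerB> vfiler dr status vf1@filerA
```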
At the time of an unplanned outage, all the user has to do is activate the vFiler on the DR site, which makes the necessary changes to the vFiler (IP address, DNS, AD, etc.) on the DR site and brings it online at the last synced state. For a planned failover or failback, the user has the option to initiate a resync of the vFiler (including volumes/LUNs, users, shares, and other configuration) before activating it on the destination. Now compare that to any traditional deployment, where one has to update all the relationships manually.
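On the 7-Mode CLI, the two scenarios above boil down to roughly the following (same hypothetical names as before; check the syntax against your ONTAP release):

```
# Unplanned outage: activate the DR vFiler on the destination.
# This applies the configured IP/DNS/AD changes and brings the
# vFiler online at the last synced state.
filerB> vfiler dr activate vf1@filerA

# Planned failover/failback: resync data and configuration up to
# the last minute first, then activate.
filerB> vfiler dr resync vf1@filerA
filerB> vfiler dr activate vf1@filerA
```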
You may be thinking how great it would be to make this an online process, where none of the applications have to be taken down. Why not? We can do that too, provided you can foresee the disaster and meet the distance limitations. Here is a demo I shared in my previous blog that demonstrates exactly this.
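The online variant is a vFiler migration rather than a DR activation. If memory serves, the 7-Mode CLI for it looks roughly like this (again with hypothetical names, and the exact workflow depends on the ONTAP version):

```
# Start migrating vFiler vf1 from filerA to filerB while it stays
# online; data is replicated in the background.
filerB> vfiler migrate start vf1@filerA

# Once the baseline transfer is done, complete the cutover:
filerB> vfiler migrate complete vf1@filerA
```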
Looking at the current trends, this has a big use case in Hybrid Cloud environments apart from Public and Private Clouds (and the traditional use cases).
Who said teleportation is difficult? We made it possible in the storage world a long time back, when no one had even thought about the possibilities of a Hybrid Cloud.
Wait a minute … No one?
Edit: Removed one of the demo videos where I used a beta functionality which didn’t make it to GA. The blog’s content is still applicable.