It goes without question that any task or project, however big or small, has to start with careful planning. In terms of disaster recovery, I discussed planning and how to analyze existing plans in my previous article. However, a disaster recovery plan has to be backed by a proper disaster recovery tool that can satisfy the company’s requirements. Choosing the right software can make disaster recovery “an inconvenience that is overcome by executing the plan carefully,” or it can mean “days of painful, sleepless, restless work for the system administrators that results in recovery that is too slow” – or worse, some parts of the data not being recovered at all.
To start with, legacy systems are not a good fit for disaster recovery. Since the speed of recovery is one of the two most important points in disaster recovery (the other being the consistency of data), file- and folder-level backups are a poor choice: they are slow to take and slow to restore. Today, almost all businesses operate at virtually 100% availability. In such a business environment, a legacy backup tool simply will not satisfy the requirements.
Besides speed, legacy solutions do not cover the other half of the computing infrastructure: operating systems and applications. That means that, in the face of a disaster, all servers will have to be rebuilt because bare-metal backups are not available. Even if they are, they need to be restored to identical hardware. In almost all tools, restoring to dissimilar hardware requires additional licences. Such complications should be thought through in advance and handled by the disaster recovery software itself. Nobody wants to see a “license required” dialogue in a disaster recovery situation.
Another similar scenario involves the hardware that will be purchased and used after the disaster. It is not possible to know in advance which server will be restored where – circumstances may dictate restoring a virtual server to a physical server, or vice versa. Therefore the solution has to work seamlessly across various infrastructure setups.
To reduce the time to recovery, image-level backups come in handy. Deployed correctly, they can cope with the toughest business requirements. The first step to a correct image-level backup is to ensure that the virtual environments are under control and there is no information/data sprawl. To help address this, deploy compression and deduplication: smaller data sets reduce recovery times (and, where replication is deployed, replication times). The application also has to be flexible both in terms of recovery level – system, folder or file – and in terms of direction – physical to virtual, or physical to physical on dissimilar hardware. And the system has to verify the backed-up data for recoverability automatically. Nobody has the time and resources to test every single backup by restoring it, but that does not mean a “backup successful” message is enough to sleep comfortably. Automatic verification of the backed-up files provides a level of relief; in addition, backups should be randomly restored to confirm that everything is OK.
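To make the verification idea concrete, here is a minimal, illustrative Python sketch – not any vendor’s actual implementation – of checksum-based backup verification. The function names and the manifest format are assumptions for the example: a digest is recorded when the backup is taken and re-checked later, so silent corruption is caught without a full test restore.

```python
import hashlib
import json
from pathlib import Path

def record_checksum(backup_path: Path, manifest_path: Path) -> str:
    """Hash the backup image and store the digest in a manifest file."""
    # Reads the whole file into memory; fine for a sketch, a real tool
    # would hash in chunks.
    digest = hashlib.sha256(backup_path.read_bytes()).hexdigest()
    manifest_path.write_text(
        json.dumps({"file": backup_path.name, "sha256": digest})
    )
    return digest

def verify_backup(backup_path: Path, manifest_path: Path) -> bool:
    """Re-hash the image and compare it against the recorded digest."""
    manifest = json.loads(manifest_path.read_text())
    current = hashlib.sha256(backup_path.read_bytes()).hexdigest()
    return current == manifest["sha256"]
```

A check like this only proves the stored bytes are intact; it complements, rather than replaces, the random test restores mentioned above.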
Many companies operate in a hybrid environment of physical and virtual servers running various operating systems. Given this complexity, the disaster recovery solution should be flexible enough to support any backup-restore scenario. Legacy backup tools that provide file, folder and even image backups work fine up to a point, but when a cloud scenario involving fabrics, load balancing and power management features enters the picture, they most likely will not be able to cope.
Finally, the disaster recovery solution has to be accessible and operable remotely. Traditionally, off-site locations were used for archiving purposes, but these archiving systems did not deliver the expected results in times of disaster. Companies therefore considered making incremental investments to turn these archiving sites into recovery sites. That posed another challenge: employing additional personnel to operate the recovery site. Companies that could afford recovery sites absorbed the costs; for those that could not, cloud solutions arrived just in time. The rise of cost-effective cloud solutions gave companies access to technologies that could not have been imagined a few years ago.
Cloud solutions have profoundly changed how companies think about disaster recovery. They work best in an incremental, image-level backup model, which ensures the fastest possible recovery when disaster strikes. Combined with deduplication across different sites (preferably global), companies use the lowest bandwidth possible, while making the most efficient use of disk space and minimizing the impact of a local disaster.
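The deduplication idea behind that model can be sketched as follows. This is an illustrative Python example assuming simple fixed-size blocks and an in-memory store – real products typically use variable-size chunking and a distributed index – but it shows why incremental image backups are cheap: only blocks not already in the store are kept or transferred.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; an assumption for this sketch

def dedup_store(data: bytes, store: dict) -> list:
    """Split an image into blocks and store only blocks not seen before.

    Returns the ordered list of block hashes: the 'recipe' needed
    to rebuild the image later.
    """
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)  # duplicate blocks cost no extra space
        recipe.append(h)
    return recipe

def rebuild(recipe: list, store: dict) -> bytes:
    """Reassemble the original image from its block recipe."""
    return b"".join(store[h] for h in recipe)
```

Because the recipe for an incremental backup references mostly existing hashes, only the handful of genuinely new blocks crosses the wire – which is what keeps cross-site bandwidth and disk usage low.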
When choosing disaster recovery software, IT managers should not forget that in the face of a disaster, where every minute counts, there is no tolerance for chasing additional licences, productivity losses, corrupted data, an inability to restore, or an inoperable or inaccessible remote site. If those basics are not covered, the disaster recovery tool will not deliver the expected results in a real disaster.
- Featured image: www.ihotelligence.com