
In my last post, I talked about secondary datacentre options (point 5 below). Let's continue our look at highlights from the roadmap for Business Continuity & Disaster Recovery deployment that I see emerging, based on shifts in IT architecture thinking and advances in technology (hyperlinks are to the earlier discussions in this series of blogs):

1. Identify critical apps and establish RTO/RPO objectives
2. Re-centralize branch infrastructure to the primary datacentre
3. Virtualize/consolidate your server environment
4. Build high availability into your IT environment locally
5. Establish a secondary datacentre site (internal or third party)
6. Deploy automated failover technology (e.g., scripting)
7. Optimize the pipe linking primary and secondary datacentres

7. On Optimizing the Inter-Datacentre Pipe

I think IT people understand what SAN-to-SAN replication is (from the primary datacentre to the secondary), and I see they already know that to achieve a Business Continuity objective of 99.999% uptime (just over five minutes of downtime a year), replication must be almost instantaneous, every time the Enter key is hit. But I also see that people aren't very happy about the resulting steep increase in networking cost from the big, fat 'pipe' this requires between datacentres, especially after expending considerable effort to reduce cost by centralizing and consolidating their server infrastructure. As a result, there is considerable focus these days on optimizing that pipe.

An excellent place to start is with data de-duplication, at both ends of the pipe. While data de-duplication is now considered a strategic technology with widely acknowledged benefits, many organizations still haven't looked at what it can do for them in dramatically reducing network traffic and bandwidth utilization, and thus pipe cost.

There is also a strong play in newer technologies that 'shape' the network traffic and reduce the 'noise' (i.e., low-priority traffic such as Web surfing and music/video downloads) on the network, further reducing monthly pipe utilization and cost. Shaping the traffic ensures that only what you want goes across the network.

Next, I suggest you look at technologies such as Citrix NetScaler or F5 appliances that compress the traffic. For example, 10:1 compression, perhaps combined with technology that sends only the changes made to files rather than whole files each time (I think they call it "distributed file management"), could turn what started out as a very large amount of data into only a very small amount of traffic over the network.

To summarize the measures I see leading organizations adopting to reduce the cost of the pipes connecting their primary and secondary datacentres: try data de-duplication, traffic shaping, distributed file management and compression. For the technically inclined, I've appended a few toy sketches at the end of this post that illustrate these ideas.

In my final post on the new BC/DR roadmap, I'll share some observations on the IT budget side of BC/DR and the shifts in thinking I see happening amongst the companies we deal with. In the meantime, feel free to e-mail me if you have thoughts on BC/DR you'd like to share, or to find out how Compugen can help you cost-effectively navigate the new Business Continuity roadmap.
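
Sketch 1: data de-duplication. A rough Python sketch of the idea, assuming simple fixed-size chunking and a SHA-256 chunk index (real SAN and backup appliances use far more sophisticated variable-size chunking): only chunks the secondary site hasn't already seen need to cross the pipe.

import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; real products use smarter, variable-size chunking

def chunks(data: bytes):
    """Split a byte stream into fixed-size chunks."""
    for i in range(0, len(data), CHUNK_SIZE):
        yield data[i:i + CHUNK_SIZE]

def replicate(data: bytes, remote_index: set) -> int:
    """Return how many bytes must actually cross the pipe.

    remote_index stands in for the secondary site's catalogue of chunk
    hashes; chunks it already holds are never re-sent."""
    sent = 0
    for c in chunks(data):
        digest = hashlib.sha256(c).digest()
        if digest not in remote_index:
            remote_index.add(digest)
            sent += len(c)
    return sent

remote = set()
payload = b"quarterly report" * 12_800   # 204,800 bytes of highly repetitive data
print("first replication sends", replicate(payload, remote), "bytes")
print("replicating the same data again sends", replicate(payload, remote), "bytes")

Running it shows the first pass sending a single 4 KB chunk and the second pass sending nothing at all, which is the whole point of de-duplicating at both ends of the pipe.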
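
Sketch 2: traffic shaping. Shaping can be pictured as a priority queue sitting in front of the link: replication and other business traffic get first claim on the bandwidth, and the low-priority 'noise' waits or gets dropped. This is a toy model with made-up traffic classes and a made-up per-interval budget, not how a real QoS engine works packet by packet.

import heapq

# Lower number = higher priority; classes and sizes are made up for illustration.
PRIORITY = {"san_replication": 0, "email": 1, "web_browsing": 2, "media_download": 3}

def shape(traffic, link_budget_mb):
    """Drain a prioritized queue until this interval's link budget is spent;
    whatever is left over is the low-priority 'noise' that gets deferred."""
    queue = [(PRIORITY[kind], kind, size) for kind, size in traffic]
    heapq.heapify(queue)
    sent, deferred = [], []
    while queue:
        _, kind, size = heapq.heappop(queue)
        if size <= link_budget_mb:
            link_budget_mb -= size
            sent.append(kind)
        else:
            deferred.append(kind)
    return sent, deferred

sent, deferred = shape(
    [("san_replication", 60), ("media_download", 40), ("email", 10), ("web_browsing", 15)],
    link_budget_mb=80,
)
print("sent now:", sent)
print("deferred:", deferred)

With an 80 MB budget, the replication and e-mail traffic go through and the Web surfing and media downloads wait, which is exactly the behaviour you want the shaper to enforce on the inter-datacentre link.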
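
Sketch 3: send only the changes, then compress. Again heavily simplified: I'm assuming a naive same-position block comparison plus zlib, where commercial replication and WAN-optimization products do something much cleverer. A 4 KB edit in a 1 MB file means only the touched blocks, compressed, cross the wire instead of the whole file.

import zlib

BLOCK = 4096

def changed_blocks(old: bytes, new: bytes) -> dict:
    """Compare two versions of a file block by block (same-position blocks
    only, a much-simplified stand-in for real delta-replication products)
    and keep just the blocks that differ."""
    deltas = {}
    for i in range(0, max(len(old), len(new)), BLOCK):
        if old[i:i + BLOCK] != new[i:i + BLOCK]:
            deltas[i] = new[i:i + BLOCK]
    return deltas

old = b"A" * 1_000_000                                 # yesterday's 1 MB file
new = old[:500_000] + b"B" * 4_096 + old[504_096:]     # today's copy with one 4 KB edit

deltas = changed_blocks(old, new)
wire = sum(len(zlib.compress(block)) for block in deltas.values())
print(f"whole file: {len(new):,} bytes; changed blocks, compressed: {wire:,} bytes")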