Best practices or obsolete practices?

The IT team does a great deal of work to ensure data is protected from threats such as natural disasters, power outages, bugs, hardware glitches, and security intrusions. Many of the best practices used to protect data today were developed for mainframe environments half a century ago. IT professionals respect them because they have relied on them for years to manage data and storage, but some of these practices have become far less effective in light of data growth realities. Several are under pressure for their cost, the time they take to perform, and their inability to adapt to change.

One best practice area that many IT teams find impractical is disaster recovery (DR). DR experts stress the importance of simulating and practicing recovery, but a simulated recovery takes a long time to prepare for and tends to disrupt production operations. As a result, many IT teams never get around to practicing their DR plans.

Another best practice area under scrutiny is backup, due to chronic problems with data growth, media errors, equipment problems, and operator miscues. Dedupe backup systems significantly reduce the amount of backup data stored and help many IT teams complete daily backups successfully. But dedupe systems tend to be costly, and their benefits are limited to backup operations; they do not extend to the recovery side of the equation. Dedupe also does not remove the need to store data off-site on tape, a technology many IT teams would prefer to do away with. Many IT teams are questioning the effectiveness of their storage best practices and are looking for ways to change or replace those that no longer work well for them.
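To make the space savings concrete, here is a minimal sketch of the content-addressed chunking idea behind most dedupe backup systems: data is split into chunks, each chunk is hashed, and only chunks whose hashes have not been seen before are stored. The fixed 4 KB chunk size and the in-memory index are illustrative assumptions for this sketch, not how any particular product works (many real systems use variable-size chunking and on-disk indexes).

    import hashlib

    CHUNK_SIZE = 4096  # illustrative fixed chunk size

    def dedupe_store(stream, store):
        """Split a byte stream into chunks; store only chunks not seen before.

        `store` maps chunk hash -> chunk bytes. Returns the list of hashes
        (the "recipe") needed to reconstruct the stream later.
        """
        recipe = []
        for offset in range(0, len(stream), CHUNK_SIZE):
            chunk = stream[offset:offset + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:      # only new content consumes space
                store[digest] = chunk
            recipe.append(digest)
        return recipe

    # Two nightly "backups" that are mostly identical share most chunks:
    store = {}
    night1 = b"A" * 40_000 + b"B" * 4_000
    night2 = b"A" * 40_000 + b"C" * 4_000   # only the tail changed
    r1 = dedupe_store(night1, store)
    r2 = dedupe_store(night2, store)
    print(len(r1) + len(r2), "chunk references;", len(store), "unique chunks stored")

Because the second backup repeats almost all of the first, the store holds only a handful of unique chunks even though the recipes reference many; that gap is where the capacity savings come from.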


Doing things the same old way doesn’t solve new problems
The root cause of most storage problems is the sheer amount of data being stored. Enterprise storage arrays lack capacity "safety valves" for capacity-full scenarios; when they run out of space, they slow to a crawl or crash. As a result, capacity planning consumes time that could be spent on other work. What many IT leaders dislike most about capacity management is the loss of reputation that comes with unexpectedly spending money on storage that was budgeted for other projects. In addition, copying large amounts of data during backup takes a long time, even with dedupe backup systems. Technologies like InfiniBand and Server Message Block (SMB) 3.0 can significantly reduce transfer times, but they can only do so much.
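As a rough illustration of why faster interconnects only go so far, the back-of-the-envelope arithmetic below estimates how long a full copy takes at a few effective throughputs. The 50 TB data size and the per-link rates are illustrative assumptions for this sketch, not measured benchmarks.

    # Back-of-the-envelope backup-window math: time = data size / effective throughput.
    # All figures below are illustrative assumptions, not benchmarks.

    TB = 1024**4                 # bytes in a tebibyte
    data_to_copy = 50 * TB       # assume a 50 TB full copy

    effective_throughput = {     # bytes/second sustained end to end (assumed)
        "1 GbE":            0.1e9,
        "10 GbE + SMB 3.0": 1.0e9,
        "FDR InfiniBand":   5.0e9,
    }

    for link, rate in effective_throughput.items():
        hours = data_to_copy / rate / 3600
        print(f"{link:18s} ~{hours:6.1f} hours")

Even at an assumed 5 GB/s, 50 TB takes roughly three hours to move, and the data keeps growing, which is why faster pipes alone cannot shrink the backup window indefinitely.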

More intelligence and different ways of managing data and storage are needed to change the dynamics of data center management. IT teams that are already under pressure to work more efficiently are looking for new technologies to reduce the amount of time they spend on storage management. The Microsoft HCS solution discussed in this book is a replacement for existing management technologies and methods that can't keep up.

Source of Information: Rethinking Enterprise Storage
