Had enough change yet? It is almost amusing that some other tech people and I had that conversation a decade ago, and since then, nothing has slowed down. Our conclusion was that, for the near future, change would be constant. That conclusion was right; I’d have to ping everyone else to see what they think, but “near future” turned out to be longer than I expected.
Inevitably, though, change requires taking a breather at times. It appeared that high-tech was finally ready to admit that burning out your staff was not a viable long-term business model, and was slowing down … and then, COVID-19 hit. Now I’m hearing rumblings about going back to non-stop change as if the last couple of years were a vacation and staff is all rested up and ready to replace massive chunks of the infrastructure.
The reason that change requires intermissions, though, is not 100% about staff and burnout. Burnout only shows up when you’re making a ton of changes non-stop. Long before it becomes an issue, recovery and business continuity plans become outdated. At the rate we’ve been running, I’d go so far as to say they are likely useless.
Yes, all this change has brought us some cool stuff. While we are busier, some of the busywork is far easier, or even non-existent now. But that doesn’t change the fact that you need to breathe. This is a perfect time to go over plans to keep the business on its feet in case of an IT catastrophe, or at least a major systems catastrophe. Saying “We’ll just deploy a new version” is cool, but it only works in some scenarios. I touched on this in October 2021, when I discussed one case where it doesn’t work. But there are a ton more. Every environment is different, but hardware failures have not disappeared and destructive writes still occur, to name just two. Your infosec team can give you plenty more.
Knowing what you’re going to do when the worst-case scenario happens is huge. And for those who’ve never been through it, figuring that out before people are stressed and asking you for a timeline will save you a lot of headaches.
For those who’ve never had a disaster recovery (DR) or business continuity plan, consider creating one. At the very least, have a basic DR plan that lists critical systems, the things most likely to go wrong and the steps to recover each one and get things moving again. The process of making such a list will often uncover things that should be done but aren’t, like backups or backup verification.
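To make that concrete, here is a minimal sketch of the kind of check that falls out of simply writing the list down: a script that flags critical systems whose latest backup is missing or stale. Every system name and path in it is a placeholder I invented for illustration; substitute whatever your own inventory contains.

```python
#!/usr/bin/env python3
"""Minimal sketch of a backup-freshness check against a DR inventory.

All system names and paths below are hypothetical placeholders;
adapt them to your own environment.
"""
import time
from pathlib import Path

# Hypothetical inventory: critical system -> where its latest backup lands.
CRITICAL_SYSTEMS = {
    "orders-db": Path("/backups/orders-db/latest.dump"),
    "auth-service": Path("/backups/auth-service/latest.tar.gz"),
    "billing": Path("/backups/billing/latest.dump"),
}

MAX_AGE_SECONDS = 24 * 60 * 60  # flag anything older than a day


def check_backups() -> list[str]:
    """Return human-readable problems found in the backup inventory."""
    problems = []
    now = time.time()
    for system, backup in CRITICAL_SYSTEMS.items():
        if not backup.exists():
            problems.append(f"{system}: no backup found at {backup}")
            continue
        age = now - backup.stat().st_mtime
        if age > MAX_AGE_SECONDS:
            problems.append(f"{system}: backup is {age / 3600:.0f} hours old")
    return problems


if __name__ == "__main__":
    for problem in check_backups():
        print("WARNING:", problem)
```

Checking a file’s age is, of course, a crude proxy; real verification means periodically restoring a backup and confirming the result actually works. But even this much, run on a schedule, turns “we think backups happen” into something you can see.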
For those with a DR or business continuity plan, I’ll take a wild shot and say, “It’s out of date.” Organizations tend to create them and then forget about them. These plans should be part of each project, so updates happen in bite-sized chunks. But, probably because the first implementation of DR or BC is normally a massive project, the plan tends to be set aside and revisited only every few years. Don’t do that.
As I’ve said before, you and yours have built the heart of the modern enterprise. Don’t forget to protect it. It doesn’t take much to use some of these sweet new technology advances to put in DR procedures. Tack DR review and updates onto each major initiative, be it a new product or a big change to existing software. And keep rocking it. If you have a say in your organization, consider slowing the pace down for a bit and actually updating DR. If you don’t have a say, may your employer be wise enough to do so.