Yesterday’s post was about cyber-attack recovery and the need for a more comprehensive recovery mechanism and approach than just “restore to the most recent backup.” As mentioned there, this also applies to failover systems. In many cases, people will put in failover systems (or cloud-based services that are highly unlikely to go down) and figure they’re covered.
But the point of the post was that cyber-attack-type recovery is more complex than this. Oftentimes you’re looking at recovering to a SPECIFIC point in time, NOT necessarily the most recent point in time. These are very different processes and require different mechanisms in place, for both you and your user base.
John wrote: “I think the term cyberattack should be broadened a bit. Just because your system isn’t down doesn’t mean you haven’t been attacked. If your data is extracted or your traffic is being read or rerouted through places you don’t know about while your system is still up, you’ve still been attacked. You just don’t know it yet. If your auditing is nonexistent, you have no way of knowing/proving you’ve been hit.”
This is really the point. Even with full auditing in place, it can take real time for recognition and confirmation of an issue to take place. So many times you start to see an indication of an issue, then look further to see what’s up. During this investigation time, your systems are still online, still taking in additional information that may be about to be jettisoned in the recovery process. Ironically, that may be the *best*-case scenario; it may well be that there is a very significant piece of information in that window that needs to be recovered, or removed.
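To make that concrete, here’s a rough sketch (in Python) of sizing what a point-in-time recovery would jettison: everything modified after the chosen restore point. The table names, the ModifiedAt column, and the connection details are hypothetical stand-ins, not something your schema is guaranteed to have.

```python
# Rough sketch: count what a restore to RESTORE_POINT would throw away.
# Table list, column name, and DSN are hypothetical for this example.
from datetime import datetime
import pyodbc

RESTORE_POINT = datetime(2024, 3, 14, 2, 15, 0)  # suspected-compromise time
TABLES = ["Orders", "Payments", "AuditTrail"]    # hypothetical tables

conn = pyodbc.connect("DSN=Sales")  # hypothetical DSN
cur = conn.cursor()

for table in TABLES:
    cur.execute(f"SELECT COUNT(*) FROM {table} WHERE ModifiedAt > ?", RESTORE_POINT)
    at_risk = cur.fetchone()[0]
    # These rows arrived while the investigation was under way; they are
    # exactly what the recovery would jettison, so they need review (and
    # possibly re-application) before you pull the trigger.
    print(f"{table}: {at_risk} rows newer than the restore point")
```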
The point in all of this is that recovery models must change. Security must change. Auditing needs to be implemented wherever possible. Controls need to be in place.
The recovery model isn’t just a single solution and process. At the very least, it’s a choice between a “recover the world” option and a “recover to a specific known-good point in time” option. Those have very different implications.
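To show just how different, here’s a minimal sketch in Python of both options against a SQL Server-style backup chain, where RESTORE ... WITH STOPAT is what gets you to a specific point in time. The server, database, file paths, and timestamp are all hypothetical.

```python
# Minimal sketch of the two recovery options against a SQL Server-style
# backup chain. Database name, file paths, and connection string are
# hypothetical stand-ins.
import pyodbc

def connect():
    # autocommit is required: RESTORE can't run inside a transaction
    return pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=myserver;DATABASE=master;Trusted_Connection=yes",
        autocommit=True,
    )

def recover_the_world(cur):
    """Option 1: roll all the way forward to the most recent state."""
    cur.execute("RESTORE DATABASE Sales FROM DISK = 'D:\\bak\\Sales_full.bak' WITH NORECOVERY, REPLACE")
    cur.execute("RESTORE LOG Sales FROM DISK = 'D:\\bak\\Sales_log.trn' WITH RECOVERY")

def recover_to_known_good_point(cur, stop_at):
    """Option 2: same chain, but stop the roll-forward before the compromise."""
    cur.execute("RESTORE DATABASE Sales FROM DISK = 'D:\\bak\\Sales_full.bak' WITH NORECOVERY, REPLACE")
    cur.execute(
        "RESTORE LOG Sales FROM DISK = 'D:\\bak\\Sales_log.trn' "
        f"WITH STOPAT = '{stop_at}', RECOVERY"
    )

cur = connect().cursor()
# recover_the_world(cur)                                  # option 1
recover_to_known_good_point(cur, "2024-03-14T02:15:00")   # option 2
```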
For security, it’s firewalls, encryption, the works. It takes layers of protection to do everything you can, on top of staying aware as applications are created, deployed, and used.
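As one example of such a layer, here’s a minimal sketch of application-side encryption of a sensitive field before it ever reaches storage, using the Python cryptography package. The field and the key handling are illustrative only; in practice the key belongs in a key vault, not in code.

```python
# Minimal sketch: encrypt a sensitive value in the application layer so
# the storage layer never sees plaintext. Field and key handling are
# illustrative; keys should come from a secrets manager, not code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: fetched from a key vault
cipher = Fernet(key)

ssn_plain = b"123-45-6789"              # hypothetical sensitive value
ssn_stored = cipher.encrypt(ssn_plain)  # what actually lands in the database

# Even if the storage layer is breached, the attacker still needs the key.
assert cipher.decrypt(ssn_stored) == ssn_plain
```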
Auditing can help as well, as can some tools available in the cloud that recognize suspicious behavior. For example, Azure has options that can detect operations that look suspicious and let you know about them. Depending on what’s going on, you may be able to stop the issue and prevent damage; at the very least you’ll have an early heads-up and won’t be left to discover the problems on your own.
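To give a feel for it, here’s a toy illustration of the kind of rule such detection applies: flag a login from an address an account has never used before. The log format is invented for the example; services like Azure’s threat detection use far richer signals than this.

```python
# Toy illustration of suspicious-behavior detection: flag logins from
# IP addresses an account has never used before. Log data is invented.
from collections import defaultdict

# (account, source_ip) pairs as they might appear in an audit log
login_events = [
    ("app_user", "10.0.0.5"),
    ("app_user", "10.0.0.5"),
    ("admin",    "10.0.0.9"),
    ("app_user", "203.0.113.77"),  # never seen before -> suspicious
]

seen_ips = defaultdict(set)
for account, ip in login_events:
    if seen_ips[account] and ip not in seen_ips[account]:
        print(f"ALERT: {account} logged in from new address {ip}")
    seen_ips[account].add(ip)
```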
And don’t forget that part of the security and awareness bit is the manual piece. Educating people about what to watch for, what could be going on, what types of things are questionable, and which situations should raise a warning flag can increase awareness and shrink response-time windows.
It’s quite an intricate web of protection your systems need, but it’s so much better to build it before it’s needed, and have it ready, than to figure it out in real time after the fact.