
When Does AI Start Threatening Data Security?

There was an interesting post in the Daily Mail about AI and phishing attacks.  Essentially, it argued that future phishing and similar cyber attacks could be driven by AI.  AI could become capable enough to assume the identity of people you work with and regularly correspond with, mimic them based on the history of communications in your system, and then use that mimicry to get you to hand over information and access that furthers its nefarious goals.

Here’s a link to the post.

You have to wonder when AI will begin to target the data in our systems.  This could take the form of modifying data, but not enough to trigger any kind of alert.  Or quietly escalating privileges by requesting access updates that look routine.  Or more direct hacks that learn from the massive cache of information already known about the users of our systems.  This isn't a "shout from the rooftops and burn the bridges" kind of thing, but we already have bot networks that can brute-force their way into systems in many cases.
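To make the "not enough to trigger any kind of alert" idea concrete, here's a minimal Python sketch of a naive change-volume alert.  The threshold, the audit-log shape, and the table names are all invented for illustration, not taken from any particular product.

```python
# Hypothetical illustration: a naive change-volume alert that only fires when a
# day's modifications exceed a fixed threshold.  A "low and slow" attacker that
# alters a handful of rows per day never trips it.  The threshold and the
# audit-log format are assumptions made for this sketch.
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

ALERT_THRESHOLD = 500  # rows changed per table per day before anyone is notified

@dataclass
class AuditEntry:
    day: date
    table: str
    rows_changed: int

def daily_alerts(audit_log: list[AuditEntry]) -> list[str]:
    """Return alert messages for any table whose daily change count exceeds the threshold."""
    totals: dict[tuple[date, str], int] = defaultdict(int)
    for entry in audit_log:
        totals[(entry.day, entry.table)] += entry.rows_changed

    return [
        f"ALERT {day}: {table} had {count} rows changed"
        for (day, table), count in totals.items()
        if count > ALERT_THRESHOLD
    ]

# A slow drip of 20 tampered rows a day stays well under the threshold, so
# daily_alerts() returns nothing even though hundreds of rows drift over a month.
```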

If an attacker were to take a bit of a softer approach, it could learn the behavior of users, almost sitting back and watching.  Learning their access times, their approach, the things they need access to, all of it.  Then, at a more general level, it could pull that information together across users.  That aggregated information would be a treasure trove of behaviorally interesting data that could be used to craft phishing attacks, direct access attacks, and even malicious data modification and access that would LOOK like the users who had been "studied" but would, in fact, be an AI-based bug.
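For a sense of how little it takes to build that kind of profile, here's a rough Python sketch that learns per-user access hours and touched objects from an access log.  The log format and the crude "unusual" test are assumptions for the example only; the point is that the same baseline a defender would use for anomaly detection is exactly what a patient attacker would want to learn.

```python
# Minimal sketch: building per-user behavioral baselines (typical login hours and
# objects touched) from access logs.  The record format and the simple
# "off hours or new object" test are illustrative assumptions.
from collections import defaultdict
from statistics import mean, pstdev

def build_profiles(access_log):
    """access_log: iterable of (user, hour_of_day, object_name) tuples."""
    hours = defaultdict(list)
    objects = defaultdict(set)
    for user, hour, obj in access_log:
        hours[user].append(hour)
        objects[user].add(obj)
    return {
        user: {
            "mean_hour": mean(hours[user]),
            "hour_stddev": pstdev(hours[user]) or 1.0,
            "objects": objects[user],
        }
        for user in hours
    }

def looks_unusual(profiles, user, hour, obj):
    """Flag activity that falls outside the user's learned pattern."""
    p = profiles[user]
    off_hours = abs(hour - p["mean_hour"]) > 2 * p["hour_stddev"]
    new_object = obj not in p["objects"]
    return off_hours or new_object
```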

Multi-factor authentication certainly addresses some of this.  If you're required to work through it to gain access, it narrows the opportunities for compromise.  But then we have to review how we allow people to access systems, modify information, pull data, and so on.  Because if you log in once a day and are then in and out of the application all day, modifying things and running queries, you can easily end up with a long window of authenticated access that could be compromised.
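One way to shrink that window, sketched very loosely below in Python, is step-up verification: if the session hasn't been re-verified recently, force another check before any sensitive operation.  The session object, the 15-minute limit, and the verify() call are hypothetical placeholders, not a specific product's API.

```python
# Sketch only: shrinking the "window of access" by requiring step-up verification
# before sensitive operations once a session gets stale.  All names and limits
# here are invented for illustration.
import time

MAX_SENSITIVE_AGE_SECONDS = 15 * 60  # re-verify after 15 minutes, an arbitrary choice

class Session:
    def __init__(self, user: str):
        self.user = user
        self.last_verified = time.time()  # set at login / last MFA challenge

    def verify(self) -> None:
        # Placeholder for an actual MFA challenge (push prompt, TOTP, etc.).
        self.last_verified = time.time()

def guard_sensitive(session: Session, operation):
    """Run a sensitive operation only if the session was verified recently."""
    if time.time() - session.last_verified > MAX_SENSITIVE_AGE_SECONDS:
        session.verify()  # force a fresh check instead of trusting the morning login
    return operation()
```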

With data systems specifically, this probably points to the need to constantly tune the "least possible access" requirements that are in place.  Limited views, controlled access and update points, and other measures that manage available behaviors and options may help limit exposure.  However, if a user who legitimately updates data is somehow used to train this kind of AI bug, even those controls won't do much to contain the damage.
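Tuning "least possible access" can be treated as a recurring comparison of what is granted against what is actually used.  The Python sketch below shows the idea with invented permissions; in practice the grants would come from the database's security catalog and the usage from audit logs.

```python
# Hedged sketch of continuously tuning least privilege: compare what each user is
# granted with what they have actually used recently, and flag the excess as
# candidates for revocation.  The permission strings and data are illustrative only.
def excess_grants(granted: dict[str, set[str]], used: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per user, permissions that are granted but unused in the review period."""
    return {
        user: perms - used.get(user, set())
        for user, perms in granted.items()
        if perms - used.get(user, set())
    }

granted = {"pat": {"SELECT orders", "UPDATE orders", "SELECT payroll"}}
used = {"pat": {"SELECT orders", "UPDATE orders"}}

# Prints {'pat': {'SELECT payroll'}}: a grant nobody is using, and a good candidate to pull.
print(excess_grants(granted, used))
```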

This may seem far away at this point in time, but the new regulations coming into force and the speed at which AI is progressing may mean that this arrives before it's expected.  What will be needed to reasonably thwart an attack from a knowledgeable, seemingly authorized access point on the inside of the company?  That seems to be the root question we may need to address sooner rather than later.