Some people may be surprised that social engineering attacks are still common. Don't people know the risks of responding to phishing emails? Don't they apply the cybersecurity training that was mandated for everyone in the company?
Apparently not. In the past year, 36% of cyberattacks were initiated with some form of social engineering. At that success rate, it's no wonder it has become a favorite strategy for threat actors. Why go to the effort of hacking through firewalls, MFA, and other security measures when you can simply steal someone's identity and use their access instead?
This has become a major headache for companies. Most security systems are set up to validate a user and grant them access to the systems they need to do their job. Logic says that once a user is validated, there should be no cause for concern. The problem is that this is exactly the thinking that threat actors are using to their advantage, and AI is making it even easier for them.
Friend or foe?
In the past there were "tells" that could indicate a phishing scam. But now that AI agents can churn out grammatically correct copy and generate code, determining whether an email request is genuine has become much harder.
But aren't AI applications used to scan for threats, identify them quickly, and shut them down? Yes, they are, but it's one AI against another, and if a defensive AI is trained to grant access to any user who has been validated, a stolen identity can slip through just as easily.
This is made worse by threat actors deceiving IT support into granting the level of admin access they want, a tactic that is becoming a common strategy. Fatigue plays a part: seeing a stream of similar password-reset requests is bound to make anyone switch off, and if a request seems normal, access is going to be granted. And if new requests for escalated access aren't flagged as unusual behavior, regardless of who the user is, it's much easier for breaches to go undetected.
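To make the idea concrete, here is a minimal sketch of what flagging an unusual escalation request could look like. The names (AccessRequest, baseline_roles, review_queue) are illustrative assumptions, not a reference to any particular product: the point is simply that a request outside a user's normal role set should be queued for review rather than auto-approved just because the account is validated.

```python
from dataclasses import dataclass

# Illustrative request type: a validated user asking for a role.
@dataclass
class AccessRequest:
    user: str
    requested_role: str  # e.g. "helpdesk", "domain-admin"

# Hypothetical baseline: the roles each user normally holds.
# Anything outside this set is treated as unusual behavior.
baseline_roles: dict[str, set[str]] = {
    "alice": {"helpdesk"},
    "bob": {"developer"},
}

# Escalations that need a human look before being granted.
review_queue: list[AccessRequest] = []

def handle_request(req: AccessRequest) -> bool:
    """Grant routine requests, but route escalations for review."""
    normal = baseline_roles.get(req.user, set())
    if req.requested_role in normal:
        return True  # matches the user's baseline: allow
    # Unusual escalation: being a validated account is not enough.
    review_queue.append(req)
    return False

# A validated account suddenly asking for admin rights gets flagged.
print(handle_request(AccessRequest("alice", "helpdesk")))      # True
print(handle_request(AccessRequest("alice", "domain-admin")))  # False, queued
```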
The bigger problem
Unfortunately for companies, the cost of not being vigilant against social engineering is climbing. Threat actors who gain access quickly can also wreak havoc quickly, and because systems are so interconnected, the disruption can be vast. This goes beyond exposure of critical data.
Think of transport systems managed through central operations, or electricity grids. Even something as simple as desynchronizing the traffic lights in a city center can gridlock traffic within minutes, and take hours to resolve.
Closer to home, when company systems go down there's the inevitable productivity paralysis, not to mention grumpy customers and reputational damage. Then there are the ransom demands that companies often grudgingly pay just to have their systems restored. Threat actors know this and aim to maximize their gains from each attack.
Keeping a step ahead of threats is now more about identifying possible vulnerabilities in how users gain access. The problem is that adding multiple layers of user validation only frustrates people by making their work more complex. The default behavior is to ignore or work around these measures. Is it any wonder that social engineering has such a high success rate?