Social Engineering and AI: The Human Psyche as a Target

Author
Jill Wick
Published
April 23, 2026


Despite modern cybersecurity measures such as firewalls, encryption, and monitoring, social engineering remains one of the most effective methods of attack. Attackers do not primarily target systems; instead, they specifically exploit trust, stress, and a willingness to help. In the age of AI, these attacks are becoming even more sophisticated: deceptively real emails, voices, and chat histories are almost indistinguishable from the real thing. When attackers hack human perception instead of systems, where does social engineering end in the age of AI?

The cyberattack on MGM Resorts, for example, began with attackers posing as IT support and imitating the company's helpdesk processes. In Hong Kong, a finance employee transferred 25 million US dollars after a deepfake of his company's CFO instructed him to do so in a video conference.

Such cases are no longer isolated incidents: with the use of artificial intelligence, the frequency, quality, and credibility of these attacks are increasing rapidly, showing that attackers are no longer just targeting systems but are specifically "hacking" the human psyche.

Why social engineering is so effective as an attack vector

Social engineering makes targeted use of psychological mechanisms to influence behavior. Psychologist Robert Cialdini identified six central principles of influence that cybercriminals also exploit systematically:

  • Reciprocity: People want to return favors and feel obliged to give something back.

  • Commitment and consistency: Once they have made a commitment, people prefer to act in line with their decisions and values.

  • Social proof: In uncertain situations, people are guided by the behavior of others.

  • Authority: People are more likely to follow people or institutions they perceive as competent.

  • Likeability: The more familiar or likeable a person appears, the more persuasive they are.

  • Scarcity: Whatever appears limited or exclusive is perceived as particularly valuable.

Attackers use these six psychological triggers to gain trust with the aim of obtaining passwords, money, or sensitive data. Today's social engineering tactics are far more sophisticated than the simple spam emails of the past: fake websites, deceptively real emails, and AI-generated video calls now look so authentic that even trained specialists can be fooled.

Next-level social hacking: from fake applicant to insider attacker with AI

Artificial intelligence takes social engineering attacks to a new level. Deepfake technologies and voice cloning now make it possible to create deceptively real identities, voices, and videos. Fraud attempts can thus be underpinned with apparent "evidence" and appear far more credible than classic phishing attacks.

This development is particularly evident in targeted insider attacks: North Korean actors use deepfake technology to impersonate real candidates in real-time video job interviews in order to gain access to companies. The cybersecurity company KnowBe4, for example, documented a case in which a supposed IT employee was hired whose identity was completely fabricated, pieced together from real and synthetic data.

Deepfakes are also increasingly being used for CEO fraud in the corporate context: Manipulated video or audio calls imitate executives in order to persuade employees to disclose sensitive information or trigger payments. The combination of psychological social engineering techniques and AI-generated content makes these attacks particularly effective.

This makes it all the more important for companies to strengthen human-centered security and security awareness, and to prepare employees specifically for AI-supported attack patterns.

We have compiled a compact checklist with 15 practical protective measures and quick tips to help you recognize social engineering attacks in your day-to-day business at an early stage and protect yourself against them.



Security awareness in the age of AI: why social engineering and deepfakes require new training approaches

The threat situation has changed noticeably. Deepfakes and AI-supported social engineering are no longer theoretical scenarios, but part of the real attack landscape and are actively used against companies. The question is therefore not so much whether organizations will be targeted, but when.

Against this backdrop, security awareness is once again gaining in importance, especially when dealing with AI-supported attack methods. Traditional training formats are often no longer sufficient to realistically reflect the new dynamics.

For this reason, traditional security awareness training is increasingly being supplemented by new, practical approaches to realistically reflect the current threat dynamics. These include, among others:

  • Simulations of social engineering attacks based on deepfake scenarios

  • Web-based training with a focus on AI-supported threats

  • Awareness formats that illustrate the effects of AI in the attack process

  • Interactive training approaches with practical exercise scenarios

The aim of such measures is to sensitize employees to new forms of digital manipulation and to strengthen a security culture that combines technical and human protection mechanisms. After all, both individual awareness and technical protective measures play a key role in defending against AI-supported social engineering attacks.

Conclusion: Social engineering, AI and the new reality of the threat landscape

Social engineering has changed fundamentally through the use of artificial intelligence. Simple attempts at deception have become highly personalized, hard-to-detect attacks that specifically target trust, perception and decision-making behavior.

At an organizational level, multi-layered security architectures and AI-supported defense mechanisms are becoming increasingly important. They make it possible to identify manipulated content, suspicious communication, and behavioral anomalies at an early stage and to stop attacks before they are carried out.

In combination with zero-trust approaches and continuous security awareness, this creates a holistic defense against modern deepfake and social engineering attacks.

One thing is clear: AI-supported manipulation is already an integral part of today's cybercrime and requires appropriate countermeasures at all levels.
