AI-Enabled Social Engineering: How Businesses Can Safeguard Their Customers

Artificial Intelligence (AI) has reshaped many aspects of daily life, and fraud is no exception.

Cybercriminals now employ AI to develop sophisticated, automated attacks that circumvent traditional security measures, from credential stuffing to personalized spear phishing and social engineering. Deepfake technology adds another layer, manipulating images, video, and audio to execute even more convincing scams.

One of the most common forms of AI-enabled fraud is social engineering. Social engineering involves manipulating individuals, often by exploiting their trust or naivety, into divulging sensitive information such as login credentials or financial data. While these scams have existed for a long time, the democratization of AI has made them more sophisticated and harder to detect. Previously, phishing scams relied on fraudsters writing their own outreach templates, which were often poorly worded and obviously fake. Now, with widespread access to publicly available AI language models like ChatGPT, bad actors can craft personalized, convincing messages that are difficult to spot, even for digital natives.

Sophisticated social engineering scams typically also involve deepfakes. With the help of AI-powered tools, scammers can create high-quality image, video, and audio deepfakes that are nearly impossible to distinguish from the real thing. AI techniques such as natural language processing (NLP), voice synthesis, and image manipulation make social engineering attacks more effective and easier to execute. With NLP, chatbots can conduct conversations indistinguishable from human interactions, more easily convincing victims to divulge sensitive information. Similarly, voice synthesis can produce realistic audio of real people, enabling voice phishing attacks or the impersonation of high-profile individuals. Image manipulation tools, meanwhile, can create convincing fake identities, making it harder for organizations to detect fraudulent activity.

These files can be used to deceive, manipulate, and exploit individuals or organizations in a number of ways, including impersonation, extortion and identity theft. For example, deepfakes can be used to create videos or audio recordings of high-profile individuals, executives, or employees in order to impersonate them. Scammers can use these deepfakes to send false instructions to employees, authorizing fraudulent wire transfers or revealing sensitive information. Fraudsters may also use deepfakes to create compromising or embarrassing content of individuals or public figures and then threaten to release the material unless a ransom is paid or other demands are met. In more targeted attacks, deepfakes can be combined with personalized spear-phishing campaigns, making the message appear more authentic and convincing. This could lead to victims divulging sensitive information, clicking on malicious links, or installing malware on their devices. Additionally, deepfakes can be used to create fake identification documents, such as passports or driver’s licenses, by replacing the photos of legitimate holders with those of the fraudsters. This can facilitate various forms of identity theft, financial fraud, or other criminal activities.

The growing use of AI in social engineering attacks is reflected in rising complaints to the Federal Trade Commission (FTC). In the three months following ChatGPT’s release, reports of imposter scams rose 34% and reports of government imposter scams rose 50%. These scams often use AI-enabled tools to impersonate trusted institutions or individuals, making it easier for attackers to gain victims’ trust and steal their personal information.

This growing threat highlights the need for organizations to understand the risks and take appropriate measures to protect themselves and their customers. Prevention starts with robust data protection and malware detection to block unauthorized access to the sensitive data that fuels these scams. Companies must ensure that customers’ personal and financial information is collected, processed, and stored securely, using encryption to protect data both in transit and at rest. Keeping consumer data out of attackers’ hands is the single most important preventive step a business can take.
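As a simple illustration of protecting sensitive fields at rest, the sketch below uses Python’s cryptography library (Fernet symmetric encryption). The field values and the key-handling step are assumptions for the example; a production system would typically generate and store keys in a dedicated key management service rather than in application code.

```python
# Minimal sketch: encrypting a sensitive customer field before it is stored.
# Assumes the `cryptography` package is installed; key handling is simplified
# for illustration -- real deployments would use a key management service.
from cryptography.fernet import Fernet

# In practice this key would be generated once and kept in a secrets vault/KMS.
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_field(plaintext: str) -> bytes:
    """Encrypt a single sensitive value (e.g., a phone number) at rest."""
    return cipher.encrypt(plaintext.encode("utf-8"))

def decrypt_field(ciphertext: bytes) -> str:
    """Decrypt a stored value when an authorized service needs it."""
    return cipher.decrypt(ciphertext).decode("utf-8")

if __name__ == "__main__":
    stored = encrypt_field("+1-555-0100")   # what lands in the database
    print(stored)
    print(decrypt_field(stored))            # recoverable only with the key
```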

To detect attacks, companies should implement a multi-layered approach to account security. AI-powered fraud detection tools can analyze customer data in real time to flag abnormal login attempts, unusual transaction patterns, and other signs of fraudulent activity. Emerging authentication solutions that use AI and machine learning, such as Incognia, can reduce or even eliminate the need for login credentials while offering stronger account protection. Rather than relying on passwords or other information the user knows, these solutions focus on the user’s distinct behavioral traits, providing dynamic, continuous authentication that is extremely difficult to forge or imitate. Implementing these measures helps businesses reduce social engineering risk while minimizing reliance on credentials that are vulnerable to theft.
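To make the idea of behavior-based risk scoring concrete, here is a deliberately simplified, hypothetical sketch, not any vendor’s actual algorithm. It compares a login event against a user’s historical profile of known devices, typical locations, and usual active hours, and produces a risk score that could trigger step-up verification. All names and thresholds are assumptions for illustration.

```python
# Hypothetical sketch of rule-based behavioral risk scoring for a login event.
# Illustrates the general approach only; real systems use richer signals and models.
from dataclasses import dataclass, field
from math import radians, sin, cos, asin, sqrt

@dataclass
class UserProfile:
    known_devices: set = field(default_factory=set)
    usual_location: tuple = (0.0, 0.0)     # (lat, lon) of typical activity
    usual_hours: range = range(7, 23)      # hours of day the user is normally active

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def risk_score(profile: UserProfile, device_id: str, location: tuple, hour: int) -> float:
    """Combine simple behavioral signals into a 0..1 risk score."""
    score = 0.0
    if device_id not in profile.known_devices:
        score += 0.4                                  # unfamiliar device
    if distance_km(profile.usual_location, location) > 500:
        score += 0.4                                  # far from typical locations
    if hour not in profile.usual_hours:
        score += 0.2                                  # unusual time of day
    return min(score, 1.0)

profile = UserProfile(known_devices={"device-abc"}, usual_location=(40.7, -74.0))
print(risk_score(profile, "device-xyz", (48.8, 2.3), hour=3))  # high score -> step-up auth
```

In practice, a score above a chosen threshold would prompt additional verification or block the session, while low-risk logins proceed without friction.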

The rise of AI-enabled social engineering scams poses a considerable risk to both companies and consumers. These attacks are increasingly sophisticated and can result in financial losses and reputational damage. While user education is important, businesses ultimately bear responsibility for keeping customer accounts safe and secure. Leveraging behavior-based solutions that proactively detect social engineering scams before they cause harm can help businesses stand out and build trust with their customers. The growing number of FTC complaints about imposter and government imposter scams underscores the urgency of acting now. As AI technology continues to evolve, organizations must stay ahead of the game and adopt advanced fraud detection and prevention strategies to safeguard their customers and themselves from the harmful effects of AI-enabled fraud.


About the Author

André Ferraz is co-founder and CEO of Incognia. Incognia delivers spoof-proof location verification used by trust & safety teams to verify and authenticate customers throughout their digital journey. Incognia’s location identity technology recognizes trusted users across more than 200M devices and detects fraudulent activity by delivering highly precise risk assessments with extremely low false positive rates.
