Executive impersonation on social media is at an all-time high as threat actors take advantage of AI to improve and scale their attacks. In Q3, accounts pretending to belong to high-ranking executives on social media climbed to more than 54% of total impersonation volume, surpassing brand attacks for the first time since Fortra began tracking this data. The volume and composition of these attacks strongly indicate they are crafted using generative AI.
AI and Social Media
In the second half of 2023, impersonation attacks as a whole grew to become the number one threat type on social media, according to Fortra’s Social Media Protection Solutions. These threats manifest as fraudulent stores or brand pages, with the majority of the volume materializing as fake executive profiles (see graph above). Fake accounts are easily created via AI, with tools capable of generating the entire lifecycle of a scam.
“These modern social impersonations are becoming more sophisticated and realistic,” says Omri Benhaim, Director of Social Media Intelligence at Fortra. “Executive threats in particular pose a danger to brands, as today’s consumers use social platforms to research organizations and, specifically, to reach out to individuals considered the face of an organization.”
The role of AI as it relates to social media was recently highlighted in Fortra’s Brand Threats Masterclass, where Benhaim stressed the growing presence of AI misinformation campaigns. “While it’s not always financially feasible for scammers to use advanced AI to do everything they need to do, there are tools – legitimate ones – that are frequently used to manage activity, leaving the actor to typically focus on account creation and one-to-one interaction.”
By automating much of the attack lifecycle, criminals can execute large-scale fraud as efficiently as, or more efficiently than, they could through manual means.
Executive Impersonation on Social Media
Threat actors lean heavily on the assumption that figureheads will influence victims to perform specific actions based on their posts. Consequently, they use AI to create and post realistic images or videos promoting offers or giveaways. These posts usually include instructions to perform a specific action, such as clicking a link or contacting the fake executive through direct message.
Below is an example of a fake executive account impersonating the CEO of a global financial institution. The profile contains a professional photo and “about” information that is a near-exact description of the executive.
Because AI can mimic imagery and speech, malicious pages and posts are often indistinguishable from those of the legitimate executive. This leaves brands scrambling to mitigate the fallout of an attack, which can range widely given the broad user base and rapid pace of communication characteristic of social platforms.
The most critical components of executive impersonation attacks are the posts. If the actor’s ultimate intent is to communicate with the victim, any comments will be answered, either by AI or by the actor themselves. Each reply further legitimizes the account, and the commenter is then encouraged to continue the conversation in direct messages.
Below is another fake executive account impersonating the CEO of a global financial institution. This profile contains links to bitcoin sites, intended to lure unsuspecting users to the provided destinations on the recommendation of the fake executive.
The window of time in which a social scam is effective is small, and if the legitimacy of a post or a direct message comes into question, everything connected to that account can be discredited. As a result, it is in the actor’s best interest to create content that appears well established and validated by others.
One way they accomplish this is by creating private accounts. This allows the actor to age the fake account over a longer period of time without public interaction or interference. Additionally, they may use AI to create subsequent fake accounts for the sole purpose of commenting on posts associated with the original account. These additional profiles are easily stood up using AI-generated copies of real people, making them effective at misleading users and security controls.
Executive Impersonation by Industry
Executive impersonation pages look different depending on the industry targeted. For instance, attacks on financial institutions focus on stealing credit card information, account data, or money taken directly from the victim’s account. Retail is a prime target for counterfeiting, as threat actors recognize that consumers increasingly research brands and product lines through the lens of influencers on social platforms. Other popular targets include cryptocurrency, computer software, and ecommerce.
Situational Risks for Executive Impersonation
Security teams should be aware of whether an executive has an active presence on social media while simultaneously monitoring for impersonation pages. Common risks associated with executive pages can directly or indirectly involve bad actors. The following situations fuel confusion and can be prevented by executives and brands themselves:
Executives who have moved to other jobs or retired, yet still own active pages associated with their former organization.
- Victims will see former logos and brand information, which adds credibility to that executive’s former position and continues to connect them with their former brand.
Executives who do not maintain an active profile.
- This can be particularly damaging, as consumers will not know where to find legitimate information associated with the brand. Imposters are capable of filling this role.
Identifying AI-Generated Content
Identifying AI-generated impersonations is challenging, as evidence of fake content is not black and white. At its core, AI is designed to mimic human behavior, which is riddled with nuances. While AI detection technologies do exist to some degree, these tools largely excel at identifying instances of plagiarism in copy or images, and much of this technology is still incapable of determining with 100% certainty whether AI is the culprit.
As Benhaim explained in the Brand Threats Masterclass, “We are hoping to see more technology come out to help identify deepfake videos because it is fairly difficult at this time, and social media platforms in particular are not very good at identifying them. The burden of definitiveness is ultimately left to human experts trained in the peculiarities of AI as well as human communications.”
AI identifiers include:
1. URLs for accounts with misspelled or manipulated executive names (a handle-matching sketch follows this list), such as:
- https://www.instagram.com/xxx_efra137/
- https://www.instagram.com/xxx_efra485/
- https://www.instagram.com/xxxx_fasrer_7878/
- https://www.instagram.com/xxxx_fr14/
- https://www.instagram.com/xxxx_fra116/
- https://www.instagram.com/xxxx_fra16/
2. Multiple accounts using the same photo of the executive:
3. A canned phrase in the profile of the account that is repeated in other accounts:
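To make the first identifier concrete, below is a minimal sketch of how a security team might score look-alike handles against an executive’s legitimate handle. The handle, candidate URLs, and use of simple string similarity are illustrative assumptions, not a description of Fortra’s tooling.

```python
# Minimal sketch: score how closely candidate account handles resemble an
# executive's legitimate handle. Near-matches that are not exact matches
# are a common hallmark of look-alike accounts.
from difflib import SequenceMatcher
from urllib.parse import urlparse

LEGITIMATE_HANDLE = "exec_fra"  # hypothetical real executive handle

def handle_from_url(url: str) -> str:
    """Pull the account handle out of a profile URL."""
    return urlparse(url).path.strip("/").lower()

def similarity_to_legitimate(url: str) -> float:
    """Return a 0-1 similarity score between the candidate handle and the legitimate one."""
    return SequenceMatcher(None, handle_from_url(url), LEGITIMATE_HANDLE).ratio()

candidates = [
    "https://www.instagram.com/xxx_efra137/",
    "https://www.instagram.com/xxxx_fra116/",
    "https://www.instagram.com/exec_fra/",
]
for url in candidates:
    score = similarity_to_legitimate(url)
    exact = handle_from_url(url) == LEGITIMATE_HANDLE
    print(f"{url}  similarity={score:.2f}  exact_match={exact}")
```

In practice, high-similarity handles that are not exact matches would be queued for analyst review alongside the photo and profile-copy checks described in items 2 and 3.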
Other identifiers that security teams should look for:
Copy
- As a general rule, posts and conversations will lack the emotion or empathy that traditionally characterizes human speech.
- English that is too perfect
- Repetitiveness, such as giving the same response to different questions. This behavior is generally more indicative of a bot, and AI is getting better at avoiding it.
- Inability to recognize or use common human idioms such as slang, short forms, or buzzwords
- Plagiarism
Images/Videos
- Fuzziness or blurring around human figures
- Anomalies within the image
- Identifiers within the metadata, such as location or the date/time created
- Metadata may also show whether the image was edited by a program (see the sketch after this list)
- Repeating visual patterns or artistic styles that make images look too perfect
- Objects that appear too small or too large in proportion to the people in the image
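As a simple illustration of the metadata checks above, the sketch below reads basic EXIF fields from an image file. It assumes the Pillow library is installed and uses a hypothetical file name; real review pipelines would pair this with the visual-anomaly checks listed above.

```python
# Minimal sketch: surface EXIF fields that reveal when an image was created
# and whether an editing program touched it. The file name and tag selection
# are illustrative assumptions.
from PIL import Image            # pip install Pillow
from PIL.ExifTags import TAGS

TAGS_OF_INTEREST = {"Software", "DateTime", "Make", "Model"}

def metadata_of_interest(path: str) -> dict:
    """Return the EXIF fields most useful for spotting edited images."""
    exif = Image.open(path).getexif()
    findings = {}
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, str(tag_id))
        if name in TAGS_OF_INTEREST:
            findings[name] = value
    return findings

if __name__ == "__main__":
    # Hypothetical file; a "Software" entry naming an editor, or an
    # implausible DateTime, warrants closer review.
    print(metadata_of_interest("suspect_profile_photo.jpg"))
```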
Threat Creation
Threat actors can abuse legitimate AI tools to launch online threats just as easily as content creators can apply them to benign social media campaigns. There are many tools leveraging artificial intelligence that automate the generation of creatives and tasks on platforms such as Meta, LinkedIn, Twitter, and more. These tools can connect social channels, turn content ideas into multiple posts, generate video and text, schedule campaigns, and more.
Examples of legitimate software that may be abused for malicious purposes:
- FeedHive
- Vista Social
- Buffer
- Flick
- Publer
AI chatbots are also broadly advertised across dark web forums as uncensored large language models capable of bypassing the restrictions imposed by services like ChatGPT. These non-restricted chatbots are offered for monthly, yearly, or lifetime subscription fees and, on occasion, for free. They can assist with many malicious activities, including:
- Generating malware
- Generating ransomware
- Writing language for phishing emails
- Creating phishing pages
- Detecting vulnerabilities
According to Nick Oram, Operations Manager for Fortra’s Dark Web & Mobile App Monitoring Services, the claims made by these tools cannot be confirmed, nor can we determine with 100% certainty their effectiveness in creating working malicious tools or services.
“However, it is important for cyber security research teams to be cognizant of the threats posed by AI chatbots and the actors exploiting these tools for fraudulent endeavors,” adds Oram. “These tools will only continue to become more sophisticated.”
Below is an example of an AI chatbot advertised on a popular dark web forum.
A successful brand is expected not only to have a presence on the top five platforms (TikTok, Facebook, Instagram, Twitter, and YouTube) but also to maintain brand and executive pages that actively engage with consumers. Criminals, as a result, focus their efforts on impersonating and driving communication on these same platforms. At a minimum, this breeds confusion. At its worst, it can mean lost funds and damage to brand reputation.
Learn what Fortra experts have to say about AI abuse in social media in the Brand Threats Masterclass.