AI-generated images exploited for new scam campaigns

May 5, 2022
Tags: AI-generated Images, Online Scam, Fraud Prevention, Cyberattack Campaigns, Social Engineering

Scammers are becoming more sophisticated as technology progresses: security researchers have discovered that AI-generated images are being used to commit fraud. According to a report, one victim received a suspicious email from a supposed attorney at a Boston law firm. On closer examination, the sender turned out to be non-existent and the email's content fabricated, designed purely to scam.

The scammer introduced themselves as a trademark attorney at Arthur Davidson Legal Services. The email claimed that the victim's blog site had made unauthorised use of an image owned by one of the attorney's clients.

Furthermore, the email cited Section 512(c) of the DMCA to explain the recipient's alleged violation and threatened legal action, claiming it was now too late for the victim to simply credit the image's owner on the blog.

Fearing the threatened penalties, the victim began adding credits to the image used on the blog site. Later, on returning to the alleged attorney's email, he noticed that the linked image was hosted on Imgur, an image-sharing platform that allows anyone to upload images, even without a user account.

It became clear that the email was likely a scam once the victim realised that the image used on the blog came from a licence-free stock photo library and infringed no copyright. The supposed attorney never responded after the victim sent proof.


The fake attorney used an AI-generated image as a profile photo on the made-up law firm website, posing as a person who does not exist.


The victim then decided to verify the email sender's identity, only to discover that the attorney they had been corresponding with was not real, and that the law firm's website had been set up only in February 2022, despite the firm's claim to have been established in 2009.

Several online platforms generate such images, and scammers can leverage them in their fraudulent operations. These AI-generated images are faces of non-existent people, which scammers find useful for propagating attacks and faking identities to extort money from their victims.

Security experts warn people about these scam campaigns, since it is easy to fall victim to the increasingly refined social-engineering tactics that threat actors employ. It is highly recommended to research any suspicious entity that appears in your inbox and to check the company's background for legitimacy.
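One practical background check suggested by this incident is comparing a domain's WHOIS registration date against the company's claimed founding year: the scam site here was registered in February 2022 despite claiming a 2009 establishment. Below is a minimal, hypothetical sketch of that check in Python; the domain name and WHOIS text are invented for illustration, and in practice you would obtain the raw record yourself (e.g. with the `whois` command-line tool).

```python
import re

# Illustrative raw WHOIS output (real output varies by registrar);
# the domain name is a made-up example, not the actual scam site.
sample_whois = """
Domain Name: example-lawfirm.com
Creation Date: 2022-02-01T10:15:00Z
Registrar: Example Registrar, LLC
"""

def creation_year(whois_text):
    """Extract the domain's creation year from raw WHOIS text, or None."""
    match = re.search(r"Creation Date:\s*(\d{4})-\d{2}-\d{2}", whois_text)
    return int(match.group(1)) if match else None

claimed_founding_year = 2009  # what the fake "law firm" claims on its site
year = creation_year(sample_whois)
if year is not None and year > claimed_founding_year:
    print(f"Red flag: domain registered in {year}, "
          f"but the firm claims to date from {claimed_founding_year}.")
```

A mismatch like this is not proof of fraud on its own, but together with other signals (an Imgur-hosted "evidence" image, an unverifiable attorney) it strongly suggests a scam.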
