The recent surge in AI-generated action figure trends across social media platforms sits at a unique intersection of entertainment and potential cybersecurity risk. Millions of users, including celebrities, government agencies, and even sports teams, are enthusiastically transforming their selfies into personalized, Barbie-like or action-style figures using AI tools such as ChatGPT. While the trend seems harmless, experts warn that it could expose individuals to a range of cyber threats, from identity theft and fraud to the creation of sophisticated deepfakes.
One of the primary concerns revolves around the vast amount of personal data being willingly shared with AI platforms. To generate these hyper-realistic figures, users upload photos and provide prompts detailing aspects of their lives, including hobbies, professions, and interests. This data collection poses a significant privacy risk. As Eamonn Maguire, Head of Account Security at Proton, points out, this practice "opens a Pandora's box of issues," as users lose control over their data and how it will be used. AI companies can utilize this information to train large language models (LLMs), personalize ads, or generate content, potentially influencing critical aspects of an individual's life, such as insurance coverage or lending terms.
Furthermore, the detailed personal and behavioral profiles created by these AI tools can be exploited for malicious purposes. Cybercriminals can leverage this information to craft more convincing phishing emails, mount social engineering attacks, or even impersonate individuals online. The Independent reports that participants in the trend often generate images featuring items that reference aspects of their lives, such as where they live, what they do for a living, or a favorite pastime; such details could help scammers trick people down the line. Dave Chronister, CEO of cybersecurity company Parameter Security, told HuffPost: "The fact that you are showing people, 'Here are the three or four things I'm most interested in at this point' and sharing it to the world, that becomes a very big risk, because now people can target you."
Another significant threat lies in the potential for data breaches. AI companies, like any organization handling large volumes of sensitive data, are vulnerable to cyberattacks. Recent incidents, such as the security lapse experienced by DeepSeek, where a database of user prompts became publicly accessible, and OpenAI's exposure of sensitive user data due to a third-party library vulnerability, highlight the risks involved. Should hackers gain access to the personal data and images uploaded for AI action figure generation, they could exploit that information for identity theft and fraud, or use it to create deepfakes for political propaganda and online scams.
The creation of deepfakes is particularly concerning. AI-generated images and videos can be manipulated to depict individuals saying or doing things they never did, potentially causing significant reputational damage or even legal issues. Peter Salib, an AI expert and professor at the University of Houston, raises the example of someone creating a fake – and possibly harmful – image from a person's photo: "You can imagine that someone who doesn't like you takes your image and uploads it... not [to] turn this into a cute action figure, but... make me a picture of this person committing a crime." Although major AI companies are actively trying to prevent this kind of misuse, the risk remains real.
To mitigate these risks, experts recommend several precautions. First and foremost, users should carefully review the privacy policies of AI-powered tools before uploading any personal information: it is crucial to understand how data will be used, how it will be stored, and whether it will be shared with third parties. It is also essential to limit the amount of personal information shared, avoiding sensitive photos or details that could be exploited, such as addresses or financial information. Using generic images or landscape photos instead of high-resolution close-ups of faces can also help protect against biometric profiling.
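For readers who want a concrete starting point, the sketch below shows one way to act on the last two suggestions before uploading a photo: stripping embedded metadata (EXIF blocks can carry GPS coordinates, timestamps, and device identifiers) and downscaling the image. It is a minimal illustration using the Python Pillow library, which none of the experts quoted here prescribe; the function and file names are placeholders.

```python
# pip install Pillow  -- an assumed dependency; the article names no specific tool
from PIL import Image

def sanitize_for_upload(src_path: str, dst_path: str, max_side: int = 1024) -> None:
    """Re-encode a photo without metadata and at reduced resolution."""
    with Image.open(src_path) as img:
        rgb = img.convert("RGB")  # normalize the mode so JPEG output always works

        # Copy only the pixel data into a fresh image. EXIF segments (which
        # can embed GPS coordinates and camera identifiers) are left behind.
        clean = Image.new("RGB", rgb.size)
        clean.putdata(list(rgb.getdata()))

        # Downscale in place, preserving aspect ratio, so the result is a less
        # useful source for biometric profiling than a full-resolution close-up.
        clean.thumbnail((max_side, max_side))

        # Saving without an exif= argument writes no EXIF segment.
        clean.save(dst_path)

# Hypothetical usage:
# sanitize_for_upload("selfie.jpg", "selfie_clean.jpg")
```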
Moreover, users should be cautious about the permissions they grant to AI platforms, allowing only the access that is necessary and being wary of excessive requests, such as access to contacts or location. Regularly monitoring financial and social media accounts for unusual activity is also advisable, since shared data could be used for unauthorized access.
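As one small example of proactive monitoring (the article recommends the habit but prescribes no tool), the sketch below queries the Have I Been Pwned v3 API to check whether an email address has appeared in a publicly known breach. The API itself is real and requires a paid key; the helper name and User-Agent string here are illustrative placeholders.

```python
# pip install requests  -- assumed; any HTTP client would do
import requests

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def known_breaches(email: str, api_key: str) -> list[str]:
    """Return the names of publicly known breaches that include this address."""
    resp = requests.get(
        HIBP_URL + email,
        headers={
            "hibp-api-key": api_key,              # key from haveibeenpwned.com
            "User-Agent": "breach-check-sketch",  # HIBP rejects requests without one
        },
        timeout=10,
    )
    if resp.status_code == 404:
        # 404 is the documented "not found in any breach" response, not an error.
        return []
    resp.raise_for_status()
    return [breach["Name"] for breach in resp.json()]

# Hypothetical usage:
# for name in known_breaches("you@example.com", api_key="YOUR-KEY"):
#     print("found in breach:", name)
```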
In conclusion, while the AI-generated action figure trend offers a fun and creative way to express oneself online, it is crucial to be aware of the potential cybersecurity risks involved. By taking necessary precautions and prioritizing data privacy, users can minimize their exposure to these threats and enjoy the benefits of AI technology more safely.