
Explore the alarming rise of AI face models in global scams, revealing how cybercriminals exploit technology to deceive victims. What’s next?
In an unsettling twist in the world of cybercrime, 24-year-old Angel, an Uzbekistani woman, has become one of many individuals applying to work as an AI face model in Cambodia. In a selfie-style video aimed at recruiters, she showcases her impressive language abilities, claiming fluency in English, Chinese, Russian, and Turkish. But rather than seeking a legitimate corporate job, she is offering her skills for a far more sinister purpose: engaging in elaborate scams that target unsuspecting Americans.
Angel's application reveals that she has worked as an AI model for over a year, a job that involves making deepfake video calls to deceive potential victims. This alarming trend has been highlighted by a WIRED investigation, which uncovered numerous recruitment videos and job postings on Telegram. The findings indicate a disturbing proliferation of individuals from various countries—including Turkey, Russia, Ukraine, and several Asian nations—eager to become AI or “real face” models, particularly in the scam-heavy region of Southeast Asia.
Sihanoukville, Cambodia, has emerged as a notorious hub for massive scamming operations, many of which are linked to human trafficking. Tens of thousands of victims are reportedly held captive and coerced into orchestrating online fraud schemes, including cryptocurrency investment scams and romance scams. The rise of AI modeling within these operations adds a new layer of complexity to the already intricate world of cybercrime.
According to Hieu Minh Ngo, a cybercrime investigator at the Vietnamese nonprofit ChongLuaDao, these operations are now recruiting individuals specifically for AI modeling roles. He explains that scammers provide software that enables the swapping of faces, allowing criminals to create deceptive personas that can engage victims in romance scams. This strategy has proven to be effective, as the use of AI technology allows for more convincing interactions with potential targets.
Ngo, who has transitioned from being a hacker to a victim advocate, has identified numerous Telegram channels advertising jobs for AI models in scam-prone cities. His research aligns with findings from Humanity Research Consultancy, an organization dedicated to combating human trafficking, which has also documented a surge in applications for modeling positions in these notorious locations.
The integration of AI into scamming tactics is a game-changer for cybercriminals. Traditionally, scammers would use fake identities to lure victims on social media or messaging platforms, often employing stolen images of celebrities or attractive individuals to gain trust. Once a relationship is established, the perpetrators would coax the victim into parting with their money.
However, the advent of deepfake technology has allowed scam operations to take this deception to new heights. If victims request a video call to verify the identity of the person they are communicating with, scammers can put a human “AI model” on the call, using face-swapping software to match the fake persona's appearance in real time. This method has led to the establishment of dedicated AI rooms within certain Southeast Asian scam centers, where video calls are conducted to maintain the facade.
Job ads for AI models are striking not only for their content but also for the excessive demands they place on applicants. A review of these posts reveals that candidates are often required to work long hours, with some listings demanding up to 100 video calls per day and others as many as 150, along with strict guidelines about maintaining a realistic appearance during these interactions.
The ads are often vague, lacking specific details about the employers and how to apply, which raises concerns about the legitimacy of the roles and the potential for exploitation.
The rise of AI models in the realm of online scams represents a critical issue that transcends individual cases. It highlights the intersection of technology and crime, demonstrating how advancements in AI can be weaponized for malicious purposes. As these operations continue to grow, so do the risks to unsuspecting individuals who may fall victim to sophisticated scams.
Moreover, this trend raises pressing questions about the regulation of AI technology and the responsibilities of tech companies in preventing its misuse. As cybercriminals become increasingly adept at leveraging AI, the need for robust countermeasures and awareness campaigns becomes ever more urgent.
As we move forward, several key developments should be monitored:
- Increased Law Enforcement Action: Governments and organizations are likely to ramp up efforts to combat these expansive scamming operations, especially as they become more visible to the public.
- Technological Countermeasures: Advances in AI could also be used to develop tools that identify and counteract deepfake technology, helping to protect potential victims.
- Public Awareness Campaigns: Ongoing education about the tactics used by scammers will be crucial in preventing individuals from falling prey to these schemes.
The evolving landscape of cybercrime underscores the importance of vigilance and proactive measures in safeguarding against deceptive practices that exploit technology. As AI continues to develop, both criminals and defenders will adapt, making it essential to stay informed and prepared for the challenges ahead.
