
The exposed database is linked to South Korea-based GenNomis. It was discovered by security researcher Jeremiah Fowler, who shared details of the exposure with WIRED. The website and its parent company, AI-Nomis, hosted a number of image-generation and chatbot tools. More than 45 GB of data, mostly made up of AI-generated images, was left in the open.
The exposed data shows how AI image-generation tools can be used to create deeply harmful and likely nonconsensual sexual content of adults, as well as child sexual abuse material (CSAM). In recent years, dozens of "deepfake" and "nudify" websites, bots, and apps have proliferated, causing thousands of women and girls to be targeted with abusive images and videos. This has coincided with a rise in AI-generated CSAM.
Fowler says the biggest issue is just how dangerous the data exposure is. Looking at it as a security researcher, or as a parent, it is terrifying, he says, as is how easy it is to create that content.
Fowler discovered the open cache of files in early March and quickly reported it to GenNomis and AI-Nomis, noting that it contained AI-generated CSAM. The database was not password-protected or encrypted. Fowler says GenNomis swiftly closed off the database, but the company never responded or contacted him about the findings.
Neither GenNomis nor AI-Nomis responded to multiple requests for comment from WIRED. However, after WIRED contacted the companies, both firms' websites appeared to be taken offline, with the GenNomis site now returning a 404 error page.
This case also highlights the disturbing extent of the market for AI tools that enable such abusive imagery, says Clare McGlynn, a law professor at Durham University in the UK who specializes in online and image-based abuse. It should serve as a reminder, she says, that the creation, possession, and distribution of CSAM is not confined to a handful of warped individuals.
Before it was wiped, GenNomis listed several different AI tools on its homepage. These included an image generator that let users enter prompts describing the images they wanted to create, or upload an image and include a prompt to modify it. There was also a face-swapping tool, a background remover, and an option to turn videos into images.
The most unsettling thing, Fowler says, was seeing images of people who were evidently celebrities reimagined as children. According to the researcher, there were also AI-generated, full-body images of young girls. He says that in those cases it is unclear whether the faces used are entirely AI-generated or based on real images.
According to Fowler, the database also contained AI-generated pornographic images of adults, as well as potential "face-swap" images. He says he looked through the files and found what appeared to be photographs of real people, which were likely used to create "explicit nude or sexual AI-generated images." "So they were taking real pictures of people and swapping their faces on there," he claims of some of the generated images.
When it was live, the GenNomis site allowed explicit AI adult imagery. Many of the images on its homepage, and in an AI "models" section, featured sexualized images of women; some were "photorealistic," while others were fully AI-generated or in animated styles. There was also a "NSFW" gallery and a "marketplace" where users could share images and potentially sell albums of AI-generated photos. The website's tagline said users could "generate unrestricted" images and videos; a previous version of the site from 2024 said "uncensored images" could be created.
GenNomis' user policies stated that only "respectful content" is allowed and that "explicit violence" and hate speech are prohibited. "Child pornography and any other illegal activities are strictly prohibited on GenNomis," its community guidelines said, adding that accounts posting prohibited content would be terminated. (Researchers, victim advocates, journalists, tech companies, and others have largely phased out the phrase "child pornography" in favor of CSAM over the past decade.)
It is unclear what moderation tools or systems, if any, GenNomis used to prevent or block the creation of AI-generated CSAM. Some users complained on the company's "community" page last year that they could not generate images of people having sex and that their prompts for non-sexual "dark humor" were blocked. Another account posted on the community page that the "NSFW" content should be addressed, as it "might be looked at by the authorities."
Fowler says that being able to see those images with nothing more than the URL shows the company was not taking all the necessary steps to block that content.
According to Henry Ajder, a deepfake expert and founder of the consultancy Latent Space Advisory, even if the company did not permit the creation of harmful and illegal content, the website's branding, which referenced "unrestricted" image creation and a "NSFW" section, suggests there may be a "clear association with intimate content without safety measures."
Ajder says he is surprised the website is linked to South Korea. Last year the country was plagued by a nonconsensual deepfake "emergency" that targeted girls, before it took measures to combat the wave of deepfake abuse. Ajder says more pressure must be placed on every part of the ecosystem that allows nonconsensual imagery to be generated with AI. The more of this that is exposed, he says, the more it forces the issue onto lawmakers, tech platforms, web hosting companies, and payment providers: all of the people who, in one way or another, knowingly or otherwise, "are supporting and enabling this to happen."
The database, Fowler says, also exposed files that appeared to include AI prompts. According to the researcher, no user data, such as usernames or passwords, was included in the exposed data. Screenshots of the prompts show the use of words such as "tiny" and "girl," and references to sexual acts between family members. The prompts also included sexual acts involving celebrities.
Fowler says it seems to him that the technology has raced ahead of any guidelines or controls. We all know that explicit images of children are illegal, he says, but that has not stopped the technology from being able to generate those images.
There has been a huge increase in AI-generated CSAM over the past two years, as generative AI systems have made such content significantly easier to create and modify. "Webpages containing AI-generated child sexual abuse material have more than quadrupled since 2023, and the photorealism of this horrific content has also soared in sophistication," says Derek Ray-Hill, the interim CEO of the Internet Watch Foundation (IWF), a UK-based nonprofit that tackles online CSAM.
The IWF has documented how criminals are increasingly creating AI-generated CSAM and refining the methods they use to make it. It is currently "just too easy" for criminals to use AI to produce and distribute sexually explicit content of children at scale and at speed, Ray-Hill says.