    An AI Image Generator’s Exposed Database Reveals What People Really Used It For

    March 31, 2025 | Tech
    According to new research shared with WIRED, tens of thousands of explicit AI-generated images, including AI-generated child sexual abuse material, were left exposed and accessible to anyone online. More than 95,000 records were found in an unsecured database belonging to an AI image-generation company, including some prompt data and images of celebrities who had been de-aged to look like children, such as Beyoncé and Ariana Grande.

    The exposed database is linked to South Korea-based GenNomis and was discovered by security researcher Jeremiah Fowler, who shared details of the exposure with WIRED. The site and its parent company, AI-Nomis, hosted a number of image-generation and chatbot tools. More than 45 GB of data, mostly made up of AI-generated images, was left in the open.

    The exposed data shows how AI image-generation tools can be used to create deeply harmful and likely nonconsensual sexual content of adults, as well as child sexual abuse material (CSAM). In recent years, dozens of "deepfake" and "nudify" websites, bots, and apps have proliferated, and thousands of women and girls have been targeted with abusive images and videos. This has coincided with a rise in AI-generated CSAM.

    The biggest thing, Fowler says of the data exposure, is just how dangerous this is. Looking at it as a security researcher, or as a parent, is terrifying, he says, as is how easy it is to create that content.

    Fowler discovered the open cache of files in early March and promptly reported it to GenNomis and AI-Nomis, noting that it contained AI-generated CSAM. The database was not password-protected or encrypted. Fowler says GenNomis quickly closed off the database but never responded to him or acknowledged the findings.

    Neither GenNomis nor AI-Nomis responded to multiple requests for comment from WIRED. However, after WIRED reached out to the companies, both firms' websites appeared to go offline, with the GenNomis site now returning a 404 error page.

    This case also highlights the disturbing extent to which there is a market for AI that enables such abusive imagery, according to Clare McGlynn, a law professor at Durham University in the UK who specializes in online and image-based abuse. It should serve as a reminder, she says, that the creation, possession, and distribution of CSAM is not rare, nor attributable only to warped individuals.

    Before it was taken down, GenNomis listed several different AI tools on its homepage. These included an image generator that let users enter prompts for the images they wanted to create, or upload an image and include a prompt to modify it. There was also a face-swapping tool, a background remover, and a video-to-image converter.

    The most disturbing thing, Fowler says, was the explicit imagery of children and seeing celebrities who had been reimagined to look like children. The researcher says there were also AI-generated images of fully clothed young girls. In those instances, he says, it is unclear whether the faces used are entirely AI-generated or based on real images.

    The database also contained AI-generated pornographic images of adults, Fowler says, as well as potential "face-swap" images. Among the files, he found what appeared to be photographs of real people, which were likely used to create explicit nude or sexual AI-generated images. Some generated images, he claims, involved taking real pictures of people and swapping their faces onto explicit content.

    When it was live, the GenNomis site allowed explicit AI adult imagery. Many of the images on its homepage, and an AI "models" section, featured sexualized images of women; some were "photorealistic," while others were fully AI-generated or in animated styles. There was also an "NSFW" gallery and a "marketplace" where users could share imagery and potentially sell albums of AI-generated photos. The website's tagline said people could "generate unrestricted" images and videos; a previous version of the site from 2024 said "uncensored images" could be created.

    GenNomis' user policies stated that only "respectful content" is allowed and that "explicit violence" and hate speech are prohibited. "Child pornography and any other illegal activities are strictly prohibited on GenNomis," its community guidelines read, adding that accounts posting prohibited content would be terminated. (Researchers, victim advocates, journalists, tech companies, and others have largely phased out the phrase "child pornography" in favor of CSAM over the past decade.)

    It is unclear what moderation tools or systems, if any, GenNomis used to prevent or prohibit the creation of AI-generated CSAM. Some users complained on its "community" page last year that they could not generate images of people having sex and that their prompts for non-sexual "dark humor" were blocked. Another account posted on the community page that the "NSFW" content should be addressed, as it might be looked at by the authorities.

    If he was able to see those images with nothing more than the URL, Fowler says, that shows the company was not taking all the necessary steps to block that content.
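
    Fowler's point is that the generated images were reachable over plain, unauthenticated requests. As a purely illustrative sketch (the article does not disclose the storage technology GenNomis used, and the endpoint below is hypothetical), the following shows what "nothing more than the URL" means in practice: a properly secured store would answer such a request with a 401/403 or require a signed, expiring link rather than serving the file.

```python
# Illustrative sketch only. The host and path below are hypothetical; the
# article does not disclose the actual storage endpoint that was exposed.
import requests


def is_publicly_readable(url: str, timeout: float = 10.0) -> bool:
    """Return True if a plain GET with no credentials is answered with 200 OK."""
    try:
        # No auth headers, cookies, or signed query parameters are sent.
        resp = requests.get(url, timeout=timeout, stream=True)
        return resp.status_code == 200
    except requests.RequestException:
        # Network errors or timeouts are treated as "not readable".
        return False


if __name__ == "__main__":
    # A correctly configured bucket or database should respond 401/403 here,
    # or only serve objects through expiring signed URLs.
    example = "https://storage.example.com/gen-images/0001.png"
    print(is_publicly_readable(example))
```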

    According to Henry Ajder, a deepfake expert and founder of the consultancy Latent Space Advisory, even if the company did not permit the creation of harmful and illegal content, the website's branding, which referenced "unrestricted" image creation and an "NSFW" section, indicated there could be a "clear association with intimate content without safety measures."

    Ajder says he is surprised the website was linked to a South Korean entity. Last year the country was plagued by a nonconsensual deepfake "emergency" targeting girls, before it took measures to combat the wave of deepfake abuse. More pressure, Ajder says, needs to be put on all parts of the ecosystem that allows synthetic imagery to be generated. The more of this that is exposed, he says, the more it forces the question onto legislators, tech platforms, web hosting companies, and payment providers, all the people who, in some form or another, knowingly or otherwise, are facilitating and enabling this to happen.

    Fowler says the database also exposed files that appeared to include AI prompts. No user data, such as logins or usernames, was included in the exposed data, the researcher says. Screenshots of the prompts show the use of words such as "tiny" and "girl," along with references to sexual acts between family members. The prompts also described sexual acts involving celebrities.

    It seems to him, Fowler says, that the technology has raced ahead of any of the guidelines or controls. From a legal standpoint, everyone knows that explicit images of children are illegal, but that has not stopped the technology from being able to generate those images.

    As generative AI systems have made it vastly easier to create and modify images over the past two years, there has been an explosion of AI-generated CSAM. "Webpages containing AI-generated child sexual abuse material have more than quadrupled since 2023, and the photorealism of this horrific content has also soared in sophistication," says Derek Ray-Hill, the interim CEO of the Internet Watch Foundation (IWF), a UK-based nonprofit that tackles online CSAM.

    The IWF has documented how criminals are increasingly creating AI-generated CSAM and refining the methods they use to make it. It is currently "just too easy" for criminals to use AI to generate and distribute sexually explicit content of children at scale and at speed, Ray-Hill says.

    Source credit
