
    Countries Block Grok Amid Backlash Over AI-Altered ‘Undressed’ Images

By Admin · January 13, 2026


    The backlash against Grok is growing as two countries became the first to block the AI chatbot developed by Elon Musk’s artificial intelligence company, xAI, after it was discovered creating sexualized images of women and children upon request. Indonesia and Malaysia implemented the blocks over the weekend in the wake of a disturbing post on New Year’s Eve from the Grok AI account on Musk’s X social media platform.   

    “Dear Community,” began the Dec. 31 post. “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok.”

    The two young girls weren’t an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The “undressing” edits have swept across an unsettling number of photos of women and children.

    Despite the Grok response’s promise of intervention, the problem hasn’t gone away. Just the opposite: Two weeks after that post, the number of images sexualized without consent has surged, along with calls for Elon Musk’s companies to rein in the behavior and for governments to take action.




    According to data from independent researcher Genevieve Oh cited by Bloomberg, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or “nudifying” images every hour. That compares with an average of only 79 such images for the top five deepfake websites combined.

Grok’s Dec. 31 post was in response to a user prompt that sought a contrite tone from the chatbot: “Write a heartfelt apology note that explains what happened to anyone lacking context.” Chatbots generate their responses from a base of training material, but individual outputs can vary from one prompt to the next.

    xAI did not respond to requests for comment.  

    Edits now limited to subscribers

    Late Thursday, a post from the Grok AI account noted a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be limited to paying subscribers. 

    Critics said that’s not a credible response.

    “I don’t see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn’t be used to generate abusive images,” Clare McGlynn, a law professor at the UK’s University of Durham, told the Washington Post.


    What’s stirring the outrage isn’t just the volume of these images and the ease of generating them — the edits are also being done without the consent of the people in the images. 

These altered images are the latest twist in one of the most disturbing aspects of generative AI: realistic but fake videos and photos. Software programs such as OpenAI’s Sora, Google’s Nano Banana and xAI’s Grok have put powerful creative tools within easy reach of everyone, and all that’s needed to produce explicit, nonconsensual images is a simple text prompt.

    Grok users can upload a photo, which doesn’t have to be original to them, and ask Grok to alter it. Many of the altered images involved users asking Grok to put a person in a bikini, sometimes revising the request to be even more explicit, such as asking for the bikini to become smaller or more transparent.

    Governments and advocacy groups have been speaking out about Grok’s image edits. On Monday, UK internet regulator Ofcom said it has opened an investigation into X based on the reports that the AI chatbot is being used “to create and share undressed images of people — which may amount to intimate image abuse or pornography — and sexualised images of children that may amount to child sexual abuse material (CSAM).”

    The European Commission has also said it was looking into the matter, as have authorities in France, Malaysia and India.

    On Friday, US senators Ron Wyden, Ben Ray Luján and Edward Markey posted an open letter to the CEOs of Apple and Google, asking them to remove both X and Grok from their app stores in response to “X’s egregious behavior” and “Grok’s sickening content generation.”

    In the US, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual imagery, but it gives those platforms until May of this year to set up the process for removing such images. 

    “Although these images are fake, the harm is incredibly real,” Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms, told CNET. She notes that those whose images are altered in sexual ways can face “psychological, somatic and social harm, often with little legal recourse.”

    How Grok lets users get risqué images

    Grok debuted in 2023 as Musk’s more freewheeling alternative to ChatGPT, Gemini and other chatbots. That stance has produced disturbing news before, as in July, when the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

    In December, xAI introduced an image-editing feature that enables users to request specific edits to a photo. That’s what kicked off the recent spate of sexualized images, of both adults and minors. In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to “change her to a dental floss bikini.”

    Grok also has a video generator with a “spicy mode” that adults 18 and older can opt into to see not-safe-for-work content. Users activate the mode by including the phrase “generate a spicy video of” in their request.

    A central concern about the Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On Dec. 31, a post from the Grok X account said that images depicting minors in minimal clothing were “isolated cases” and that “improvements are ongoing to block such requests entirely.”

    In response to a post by Woow Social suggesting that Grok simply “stop allowing user-uploaded images to be altered,” the Grok account replied that xAI was “evaluating features like image alteration to curb nonconsensual harm,” but did not say that the change would be made. 

    According to NBC News, some sexualized images created since December have been removed, and some of the accounts that requested them have been suspended.

    Conservative influencer and author Ashley St. Clair, mother to one of Musk’s 14 children, told NBC News this week that Grok has created numerous sexualized images of her, including some using images from when she was a minor. St. Clair told NBC News that Grok agreed to stop doing so when she asked, but that it did not.

    “xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability just because it’s ‘AI,'” Ben Winters, director of AI and data privacy for nonprofit Consumer Federation of America, said in a statement last week. “AI is no different than any other product — the company has chosen to break the law and must be held accountable.”

    What the experts say

    The source material for these explicit, nonconsensual edits, people’s own photos of themselves or their children, is all too easy for bad actors to access. But protecting yourself from such edits is not as simple as never posting photographs, says Brigham, the researcher who studies sociotechnical harms.

    “The unfortunate reality is that even if you don’t post images online, other public images of you could theoretically be used in abuse,” she said. 

    And while not posting photos online is one preventive step that people can take, doing so “risks reinforcing a culture of victim-blaming,” Brigham said. “Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable.”

    Sourojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and mentors future AI professionals in designing and advocating for safer AI solutions. 

    Ghosh says it’s possible to build safeguards into artificial intelligence. In 2023, he was one of the researchers looking into the sexualization capabilities of AI. He notes that the AI image generation tool Stable Diffusion had a built-in not-safe-for-work threshold. A prompt that violated the rules would trigger a black box to appear over a questionable part of the image, although it didn’t always work perfectly.

    “The point I’m trying to make is that there are safeguards that are in place in other models,” Ghosh told CNET.

    He also notes that when users of ChatGPT or Gemini include certain words, those chatbots will tell the user that they are barred from responding to such prompts.

    “All this is to say, there is a way to very quickly shut this down,” Ghosh said.




