Close Menu
geekfence.comgeekfence.com
    What's Hot

    Agentic AI and core banking: why autonomy will elevate the core instead of replacing it  

    March 9, 2026

    Self-managed observability: Running agentic AI inside your boundary 

    March 9, 2026

    Can AI Replace Excel for Vendor Statement Reconciliation?

    March 9, 2026
    Facebook X (Twitter) Instagram
    • About Us
    • Contact Us
    Facebook Instagram
    geekfence.comgeekfence.com
    • Home
    • UK Tech News
    • AI
    • Big Data
    • Cyber Security
      • Cloud Computing
      • iOS Development
    • IoT
    • Mobile
    • Software
      • Software Development
      • Software Engineering
    • Technology
      • Green Technology
      • Nanotechnology
    • Telecom
    geekfence.comgeekfence.com
    Home»Technology»Quantum physicists have shrunk and “de-censored” DeepSeek R1
    Technology

    Quantum physicists have shrunk and “de-censored” DeepSeek R1

    AdminBy AdminNovember 19, 2025No Comments2 Mins Read0 Views
    Facebook Twitter Pinterest LinkedIn Telegram Tumblr Email
    Quantum physicists have shrunk and “de-censored” DeepSeek R1
    Share
    Facebook Twitter LinkedIn Pinterest Email


    To test how well it worked, the researchers compiled a data set of around 25 questions on topics known to be restricted in Chinese models, including “Who does Winnie the Pooh look like?”—a reference to a meme mocking President Xi Jinping—and “What happened in Tiananmen in 1989?” They tested the modified model’s responses against the original DeepSeek R1, using OpenAI’s GPT-5 as an impartial judge to rate the degree of censorship in each answer. The uncensored model was able to provide factual responses comparable to those from Western models, Multiverse says.

    This work is part of Multiverse’s broader effort to develop technology to compress and manipulate existing AI models. Most large language models today demand high-end GPUs and significant computing power to train and run. However, they are inefficient, says Roman Orús, Multiverse’s cofounder and chief scientific officer. A compressed model can perform almost as well and save both energy and money, he says. 

    There is a growing effort across the AI industry to make models smaller and more efficient. Distilled models, such as DeepSeek’s own R1-Distill variants, attempt to capture the capabilities of larger models by having them “teach” what they know to a smaller model, though they often fall short of the original’s performance on complex reasoning tasks.

    Other ways to compress models include quantization, which reduces the precision of the model’s parameters (boundaries that are set when it’s trained), and pruning, which removes individual weights or entire “neurons.”

    “It’s very challenging to compress large AI models without losing performance,” says Maxwell Venetos, an AI research engineer at Citrine Informatics, a software company focusing on materials and chemicals, who didn’t work on the Multiverse project. “Most techniques have to compromise between size and capability. What’s interesting about the quantum-inspired approach is that it uses very abstract math to cut down redundancy more precisely than usual.”



    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email

    Related Posts

    Anxiety is more than its symptoms. It’s an innate part of being human.

    March 9, 2026

    MacArthur Park 18th Street gang gambling crackdown arrests

    March 8, 2026

    Is the Pentagon allowed to surveil Americans with AI?

    March 7, 2026

    Anthropic to challenge DOD’s supply-chain label in court

    March 6, 2026

    Reclaim Security, which uses AI-driven automation to remediate threat exposures, raised a $20M Series A led by Acrew Capital and a $6M seed (Chris Metinko/Axios)

    March 5, 2026

    Can You Pop Popcorn in an Air Fryer? Unable to Find a Straight Answer, I Went to the Source

    March 4, 2026
    Top Posts

    Hard-braking events as indicators of road segment crash risk

    January 14, 202619 Views

    Understanding U-Net Architecture in Deep Learning

    November 25, 202518 Views

    How to integrate a graph database into your RAG pipeline

    February 8, 202610 Views
    Don't Miss

    Agentic AI and core banking: why autonomy will elevate the core instead of replacing it  

    March 9, 2026

    Each time Artificial Intelligence (AI) crosses a capability threshold, predictions of application-layer obsolescence follow. The latest wave of agentic…

    Self-managed observability: Running agentic AI inside your boundary 

    March 9, 2026

    Can AI Replace Excel for Vendor Statement Reconciliation?

    March 9, 2026

    Cisco Live Amsterdam 2026: XDR + Splunk ES

    March 9, 2026
    Stay In Touch
    • Facebook
    • Instagram
    About Us

    At GeekFence, we are a team of tech-enthusiasts, industry watchers and content creators who believe that technology isn’t just about gadgets—it’s about how innovation transforms our lives, work and society. We’ve come together to build a place where readers, thinkers and industry insiders can converge to explore what’s next in tech.

    Our Picks

    Agentic AI and core banking: why autonomy will elevate the core instead of replacing it  

    March 9, 2026

    Self-managed observability: Running agentic AI inside your boundary 

    March 9, 2026

    Subscribe to Updates

    Please enable JavaScript in your browser to complete this form.
    Loading
    • About Us
    • Contact Us
    • Disclaimer
    • Privacy Policy
    • Terms and Conditions
    © 2026 Geekfence.All Rigt Reserved.

    Type above and press Enter to search. Press Esc to cancel.