    Microsoft Uncovers ‘Whisper Leak’ Attack That Identifies AI Chat Topics in Encrypted Traffic

    By Admin | November 9, 2025

    Microsoft has disclosed details of a novel side-channel attack targeting remote language models that, under certain circumstances, could enable a passive adversary observing network traffic to glean details about conversation topics despite encryption protections.

    This leakage of data exchanged between humans and streaming-mode language models could pose serious risks to the privacy of user and enterprise communications, the company noted. The attack has been codenamed Whisper Leak.

    “Cyber attackers in a position to observe the encrypted traffic (for example, a nation-state actor at the internet service provider layer, someone on the local network, or someone connected to the same Wi-Fi router) could use this cyber attack to infer if the user’s prompt is on a specific topic,” security researchers Jonathan Bar Or and Geoff McDonald, along with the Microsoft Defender Security Research Team, said.

    Put differently, an attacker in a position to observe the encrypted TLS traffic between a user and an LLM service can extract packet size and timing sequences and feed them to trained classifiers to infer whether the conversation topic matches a sensitive target category.
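
    The observation step is straightforward to picture in code. Below is a minimal sketch of how an eavesdropper might pull those two features from a capture of the TLS session; scapy, the port filter, and the helper name are illustrative choices, not Microsoft's actual tooling.

```python
# Minimal sketch of the observation step (not Microsoft's tooling): pull the
# two features Whisper Leak relies on -- encrypted packet sizes and
# inter-arrival times -- out of a capture of the TLS session.
from scapy.all import TCP, rdpcap

TLS_PORT = 443  # server side of the chat-service connection (assumed)

def extract_trace(pcap_path: str) -> list[tuple[int, float]]:
    """Return (ciphertext_size, inter_arrival_seconds) per server-to-client packet."""
    trace = []
    last_time = None
    for pkt in rdpcap(pcap_path):
        # Only server responses carry the streamed tokens.
        if TCP in pkt and pkt[TCP].sport == TLS_PORT and len(pkt[TCP].payload) > 0:
            now = float(pkt.time)
            gap = 0.0 if last_time is None else now - last_time
            trace.append((len(pkt[TCP].payload), gap))
            last_time = now
    return trace
```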

    Model streaming in large language models (LLMs) is a technique that delivers data incrementally as the model generates its response, instead of making the user wait for the entire output to be computed. It's a critical feedback mechanism, since some responses can take a long time depending on the complexity of the prompt or task.
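
    For illustration, here is what streaming consumption typically looks like with the OpenAI Python client (the model name and prompt are placeholders). Each chunk arrives as its own encrypted record on the wire, which is exactly what produces the size and timing pattern Whisper Leak measures.

```python
# Illustrative streaming request; each incremental chunk travels as a
# separate encrypted record, leaking a size/timing fingerprint.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Explain TLS handshakes briefly."}],
    stream=True,  # tokens are delivered as they are generated
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```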


    The latest technique demonstrated by Microsoft is significant, not least because it works even though communications with artificial intelligence (AI) chatbots are encrypted with HTTPS, which is meant to keep the contents of the exchange confidential and tamper-proof.

    Several side-channel attacks have been devised against LLMs in recent years, including inferring the length of individual plaintext tokens from the size of encrypted packets in streaming model responses, and exploiting timing differences caused by caching LLM inferences to carry out input theft (aka InputSnatch).

    Whisper Leak builds upon these findings to explore the possibility that “the sequence of encrypted packet sizes and inter-arrival times during a streaming language model response contains enough information to classify the topic of the initial prompt, even in the cases where responses are streamed in groupings of tokens,” per Microsoft.

    To test this hypothesis, the Windows maker said it trained a binary classifier as a proof-of-concept that’s capable of differentiating between a specific topic prompt and the rest (i.e., noise) using three different machine learning models: LightGBM, Bi-LSTM, and BERT.
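
    As a rough sketch of what such a proof-of-concept could look like with the LightGBM option (the feature layout, sequence length, and padding scheme here are assumptions, not details from Microsoft's write-up):

```python
# Hedged sketch of the classification step: a binary LightGBM classifier over
# fixed-length vectors of (size, gap) pairs extracted from captured traces.
import numpy as np
from lightgbm import LGBMClassifier

SEQ_LEN = 200  # assumed: truncate/zero-pad every trace to 200 packets

def to_vector(trace: list[tuple[int, float]]) -> np.ndarray:
    """Flatten one (size, gap) sequence into a fixed-length feature vector."""
    arr = np.zeros((SEQ_LEN, 2), dtype=np.float32)
    for i, (size, gap) in enumerate(trace[:SEQ_LEN]):
        arr[i] = (size, gap)
    return arr.ravel()

def train_topic_classifier(traces, labels) -> LGBMClassifier:
    """labels: 1 = prompt on the target topic, 0 = background noise."""
    X = np.stack([to_vector(t) for t in traces])
    clf = LGBMClassifier(n_estimators=300)
    clf.fit(X, np.asarray(labels))
    return clf
```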

    The result: against many models from Mistral, xAI, DeepSeek, and OpenAI, the classifier achieved scores above 98%, making it possible for an attacker monitoring random conversations with those chatbots to reliably flag prompts on the target topic.

    “If a government agency or internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics – whether that’s money laundering, political dissent, or other monitored subjects – even though all the traffic is encrypted,” Microsoft said.

    [Figure: Whisper Leak attack pipeline]

    To make matters worse, the researchers found that the effectiveness of Whisper Leak can improve as the attacker collects more training samples over time, turning it into a practical threat. Following responsible disclosure, OpenAI, Mistral, Microsoft, and xAI have all deployed mitigations to counter the risk.

    “Combined with more sophisticated attack models and the richer patterns available in multi-turn conversations or multiple conversations from the same user, this means a cyberattacker with patience and resources could achieve higher success rates than our initial results suggest,” Microsoft added.

    One effective countermeasure devised by OpenAI, Microsoft, and Mistral involves adding a “random sequence of text of variable length” to each response, which, in turn, masks the length of each token to render the side-channel moot.
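
    In code, that mitigation might look something like the following server-side sketch; the field names and padding bounds are made up for illustration and do not reflect any provider's actual wire format.

```python
# Sketch of the padding mitigation: append a random-length filler field to
# each streamed chunk so ciphertext size no longer tracks token length.
import json
import secrets
import string

def pad_chunk(token_text: str, max_pad: int = 64) -> bytes:
    """Serialize one streamed chunk with random-length padding attached."""
    pad_len = secrets.randbelow(max_pad + 1)  # cryptographically random length
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    payload = {"content": token_text, "p": filler}  # client discards "p"
    return json.dumps(payload).encode("utf-8")
```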


    Microsoft also recommends that users concerned about their privacy avoid discussing highly sensitive topics with AI providers over untrusted networks, use a VPN for an extra layer of protection, opt for non-streaming LLM models, and switch to providers that have implemented mitigations.
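
    The non-streaming option is a one-line change on the client side. A sketch, again with a placeholder model name:

```python
# Requesting the full response in one message removes the per-token packet
# pattern (at the cost of waiting for the whole answer).
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this clause."}],
    stream=False,  # the response arrives as a single payload
)
print(resp.choices[0].message.content)
```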

    The disclosure comes as a new evaluation of eight open-weight LLMs from Alibaba (Qwen3-32B), DeepSeek (v3.1), Google (Gemma 3-1B-IT), Meta (Llama 3.3-70B-Instruct), Microsoft (Phi-4), Mistral (Large-2 aka Large-Instruct-2407), OpenAI (GPT-OSS-20b), and Zhipu AI (GLM 4.5-Air) has found them to be highly susceptible to adversarial manipulation, specifically when it comes to multi-turn attacks.

    [Figure: Comparative vulnerability analysis showing attack success rates across tested models for single-turn and multi-turn scenarios]

    “These results underscore a systemic inability of current open-weight models to maintain safety guardrails across extended interactions,” Cisco AI Defense researchers Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan, and Adam Swanda said in an accompanying paper.

    “We assess that alignment strategies and lab priorities significantly influence resilience: capability-focused models such as Llama 3.3 and Qwen 3 demonstrate higher multi-turn susceptibility, whereas safety-oriented designs such as Google Gemma 3 exhibit more balanced performance.”

    These findings show that organizations adopting open-source models can face operational risks in the absence of additional security guardrails, and they add to a growing body of research exposing fundamental security weaknesses in LLMs and AI chatbots since OpenAI's ChatGPT made its public debut in November 2022.

    This makes it crucial that developers enforce adequate security controls when integrating such capabilities into their workflows, fine-tune open-weight models to be more robust to jailbreaks and other attacks, conduct periodic AI red-teaming assessments, and implement strict system prompts that are aligned with defined use cases.



