{"id":24979,"date":"2025-12-04T14:59:45","date_gmt":"2025-12-04T10:59:45","guid":{"rendered":"https:\/\/me-en.kaspersky.com\/blog\/?p=24979"},"modified":"2025-12-04T15:00:06","modified_gmt":"2025-12-04T11:00:06","slug":"chatbot-eavesdropping-whisper-leak-protection","status":"publish","type":"post","link":"https:\/\/me-en.kaspersky.com\/blog\/chatbot-eavesdropping-whisper-leak-protection\/24979\/","title":{"rendered":"How to eavesdrop on a neural network"},"content":{"rendered":"<p>People entrust neural networks with their most important, even intimate, matters: verifying medical diagnoses, seeking love advice, or turning to AI <a href=\"https:\/\/edition.cnn.com\/2024\/12\/18\/health\/chatbot-ai-therapy-risks-wellness\/\" target=\"_blank\" rel=\"noopener nofollow\">instead of a psychotherapist<\/a>. There are already known cases of <a href=\"https:\/\/www.cnn.com\/2025\/11\/06\/us\/openai-chatgpt-suicide-lawsuit-invs-vis\" target=\"_blank\" rel=\"noopener nofollow\">suicide planning<\/a>, <a href=\"https:\/\/abcnews.go.com\/US\/las-vegas-cybertruck-explosion-suspect-chatgpt-plan-attack\/story?id=117428523\" target=\"_blank\" rel=\"noopener nofollow\">real-world attacks<\/a>, and other dangerous acts facilitated by LLMs. Consequently, private chats between humans and AI are drawing increasing attention from governments, corporations, and curious individuals.<\/p>\n<p>So there will be no shortage of people willing to deploy the Whisper Leak attack in the wild. After all, it lets an eavesdropper infer the general topic of a conversation with a neural network without tampering with the traffic in any way \u2014 simply by analyzing the size and timing of the encrypted packets traveling between the user and the AI server. However, you can still keep your chats private; more on this below\u2026<\/p>\n<h2>How the Whisper Leak attack works<\/h2>\n<p>All language models generate their output progressively. 
To the user, this appears as if a person on the other end is typing word by word. In reality, however, language models operate not with individual characters or words, but with tokens \u2014 a kind of semantic unit for LLMs. The AI response appears on screen as these tokens are generated. This output mode is known as \u201cstreaming\u201d, and it turns out you can infer the topic of the conversation by measuring the stream\u2019s characteristics. We\u2019ve previously <a href=\"https:\/\/www.kaspersky.com\/blog\/ai-chatbot-side-channel-attack\/51064\/\" target=\"_blank\" rel=\"noopener nofollow\">covered a research effort<\/a> that managed to fairly accurately reconstruct the text of a chat with a bot by analyzing the length of each token it sent.<\/p>\n<p>Researchers at Microsoft took this further by <a href=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2025\/11\/07\/whisper-leak-a-novel-side-channel-cyberattack-on-remote-language-models\/\" target=\"_blank\" rel=\"noopener nofollow\">analyzing the response characteristics<\/a> from 30 different AI models to 11,800 prompts. A hundred of these prompts were variations on the question \u201cIs money laundering legal?\u201d, while the rest were random prompts covering entirely different topics.<\/p>\n<p>By comparing the server response delay, packet size, and total packet count, the researchers were able to separate \u201cdangerous\u201d queries from \u201cnormal\u201d ones very accurately. They also used neural networks for the analysis \u2014 though not LLMs. Depending on the model being studied, the accuracy of identifying \u201cdangerous\u201d topics ranged from 71% to 100%, with accuracy exceeding 97% for 19 out of the 30 models.<\/p>\n<p>The researchers then conducted a more complex and realistic experiment. They tested a dataset of 10,000 random conversations, where only one focused on the chosen topic.<\/p>\n<p>The results were more varied, but the simulated attack still proved quite successful. 
For models such as Deepseek-r1, Groq-llama-4, gpt-4o-mini, xai-grok-2 and -3, as well as Mistral-small and Mistral-large, researchers were able to detect the signal in the noise in 50% of their experiments with zero false positives.<\/p>\n<p>For Alibaba-Qwen2.5, Lambda-llama-3.1, gpt-4.1, gpt-o1-mini, Groq-llama-4, and Deepseek-v3-chat, the detection success rate dropped to 20% \u2014 though still without false positives. Meanwhile, for Gemini 2.5 pro, Anthropic-Claude-3-haiku, and gpt-4o-mini, the detection of \u201cdangerous\u201d chats on Microsoft\u2019s servers was successful in only 5% of cases. The success rate for other tested models was even lower.<\/p>\n<p>A key point to consider is that the results depend not only on the specific AI model, but also on the server configuration on which it\u2019s running. Therefore, the same OpenAI model might show different results in Microsoft\u2019s infrastructure versus OpenAI\u2019s own servers. The same holds true for all open-source models.<\/p>\n<h2>Practical implications: what does it take for Whisper Leak to work?<\/h2>\n<p>If a well-resourced attacker has access to their victims\u2019 network traffic \u2014 for instance, by controlling a router at an ISP or within an organization \u2014 they can detect a significant percentage of conversations on topics of interest simply by measuring traffic sent to the AI assistant servers, all while maintaining a very low error rate. However, this does not equate to automatic detection of any possible conversation topic. The attacker must first train their detection systems on specific themes \u2014 the model will only identify those.<\/p>\n<p>This threat cannot be dismissed as purely theoretical. Law enforcement agencies could, for example, monitor queries related to weapons or drug manufacturing, while companies might track employees\u2019 job search queries. 
However, using this technology to conduct mass surveillance across hundreds or thousands of topics isn\u2019t feasible \u2014 it\u2019s just too resource-intensive.<\/p>\n<p>In response to the research, some popular AI services have altered their server algorithms to make this attack more difficult to execute.<\/p>\n<h2>How to protect yourself from Whisper Leak<\/h2>\n<p>The primary responsibility for defense against this attack lies with the providers of AI models. They need to deliver generated text in a way that prevents the topic from being discerned from the token generation patterns. Following Microsoft\u2019s research, companies including OpenAI, Mistral, Microsoft Azure, and xAI reported that they were addressing the threat. They now add a small amount of invisible padding to the packets sent by the neural network, which disrupts Whisper Leak algorithms. Notably, Anthropic\u2019s models were less susceptible to this attack from the start.<\/p>\n<p>If you\u2019re using a model and servers for which Whisper Leak remains a concern, you can either switch to a less vulnerable provider or adopt additional precautions. 
These measures are also relevant for anyone looking to safeguard against future attacks of this type:<\/p>\n<ul>\n<li>Use local AI models for highly sensitive topics \u2014 you can follow <a href=\"https:\/\/www.kaspersky.com\/blog\/how-to-use-ai-locally-and-securely\/50576\/\" target=\"_blank\" rel=\"noopener nofollow\">our guide<\/a>.<\/li>\n<li>Configure the model to use non-streaming output where possible so the entire response is delivered at once rather than word by word.<\/li>\n<li>Avoid discussing sensitive topics with chatbots when connected to untrusted networks.<\/li>\n<li>Use a <a href=\"https:\/\/me-en.kaspersky.com\/premium?icid=me-en_bb2022-kdplacehd_acq_ona_smm__onl_b2c_kdaily_lnk_sm-team___kprem___\" target=\"_blank\" rel=\"noopener\">robust and trusted VPN provider<\/a> for greater connection security.<\/li>\n<li>Remember that the most likely point of leakage for any chat information is your own computer. Therefore, it\u2019s essential to protect it from spyware with a <a href=\"https:\/\/me-en.kaspersky.com\/premium?icid=me-en_bb2022-kdplacehd_acq_ona_smm__onl_b2c_kdaily_lnk_sm-team___kprem___\" target=\"_blank\" rel=\"noopener\">reliable security solution<\/a>\u00a0running on both your computer and all your smartphones.<\/li>\n<\/ul>\n<blockquote><p><strong>Here are some more articles explaining what other risks are associated with using AI, and how to configure AI tools properly:<\/strong><\/p>\n<ul>\n<li><a href=\"https:\/\/www.kaspersky.com\/blog\/ai-sidebar-spoofing-atlas-comet\/54769\/\" target=\"_blank\" rel=\"noopener nofollow\">AI sidebar spoofing: a new attack on AI browsers<\/a><\/li>\n<li><a href=\"https:\/\/www.kaspersky.com\/blog\/ai-browser-security-privacy-risks\/54303\/\" target=\"_blank\" rel=\"noopener nofollow\">The pros and cons of AI-powered browsers<\/a><\/li>\n<li><a href=\"https:\/\/www.kaspersky.com\/blog\/ai-chatbot-side-channel-attack\/51064\/\" target=\"_blank\" rel=\"noopener nofollow\">How hackers can read your 
chats with ChatGPT or Microsoft Copilot<\/a><\/li>\n<li><a href=\"https:\/\/www.kaspersky.com\/blog\/chatgpt-privacy-and-security\/54607\/\" target=\"_blank\" rel=\"noopener nofollow\">Privacy settings in ChatGPT<\/a><\/li>\n<li><a href=\"https:\/\/www.kaspersky.com\/blog\/deepseek-privacy-and-security\/54643\/\" target=\"_blank\" rel=\"noopener nofollow\">DeepSeek: configuring privacy and deploying a local version<\/a><\/li>\n<\/ul>\n<\/blockquote>\n<input type=\"hidden\" class=\"category_for_banner\" value=\"premium-generic\">\n","protected":false},"excerpt":{"rendered":"<p>The Whisper Leak attack allows its perpetrator to guess the topic of your conversation with an AI assistant \u2014 without decrypting the traffic. We explore how this is possible, and what you can do to protect your AI chats.<\/p>\n","protected":false},"author":2722,"featured_media":24980,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1486],"tags":[1481,2863,2611,261,22,2822,2761,43,321,521,131],"class_list":{"0":"post-24979","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-threats","8":"tag-ai","9":"tag-anthropic","10":"tag-chatgpt","11":"tag-encryption","12":"tag-google","13":"tag-llm","14":"tag-openai","15":"tag-privacy","16":"tag-technology","17":"tag-threats","18":"tag-tips-2"},"hreflang":[{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/chatbot-eavesdropping-whisper-leak-protection\/24979\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/chatbot-eavesdropping-whisper-leak-protection\/29909\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/chatbot-eavesdropping-whisper-leak-protection\/29785\/"},{"hreflang":"es-mx","url":"https:\/\/latam.kaspersky.com\/blog\/chatbot-eavesdropping-whisper-leak-protection\/28839\/"},{"hreflang":"es","url":"https:\/\/www.kaspersky.es\/blog\/c
hatbot-eavesdropping-whisper-leak-protection\/31723\/"},{"hreflang":"it","url":"https:\/\/www.kaspersky.it\/blog\/chatbot-eavesdropping-whisper-leak-protection\/30365\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/chatbot-eavesdropping-whisper-leak-protection\/40994\/"},{"hreflang":"tr","url":"https:\/\/www.kaspersky.com.tr\/blog\/chatbot-eavesdropping-whisper-leak-protection\/14112\/"},{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/chatbot-eavesdropping-whisper-leak-protection\/54905\/"},{"hreflang":"fr","url":"https:\/\/www.kaspersky.fr\/blog\/chatbot-eavesdropping-whisper-leak-protection\/23471\/"},{"hreflang":"pt-br","url":"https:\/\/www.kaspersky.com.br\/blog\/chatbot-eavesdropping-whisper-leak-protection\/24593\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/chatbot-eavesdropping-whisper-leak-protection\/29999\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/chatbot-eavesdropping-whisper-leak-protection\/35708\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/chatbot-eavesdropping-whisper-leak-protection\/35336\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/me-en.kaspersky.com\/blog\/tag\/ai\/","name":"AI"},"_links":{"self":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/24979","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2722"}],"replies":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=24979"}],"version-history":[{"count":1,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/24979\/revisions"}],"predecessor-version":[{"id":24981,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/24979\/revisions\/24981"}],"wp:
featuredmedia":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/24980"}],"wp:attachment":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=24979"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/categories?post=24979"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=24979"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}