{"id":25212,"date":"2026-02-06T15:39:42","date_gmt":"2026-02-06T11:39:42","guid":{"rendered":"https:\/\/me-en.kaspersky.com\/blog\/?p=25212"},"modified":"2026-02-06T15:39:42","modified_gmt":"2026-02-06T11:39:42","slug":"how-to-recognize-a-deepfake","status":"publish","type":"post","link":"https:\/\/me-en.kaspersky.com\/blog\/how-to-recognize-a-deepfake\/25212\/","title":{"rendered":"How to recognize a deepfake: attack of the clones"},"content":{"rendered":"<p>Technologies for creating fake video and voice messages are accessible to anyone these days, and scammers are busy mastering the art of deepfakes. No one is immune to the threat \u2014 modern neural networks can clone a person\u2019s voice from just <a href=\"https:\/\/www.cnet.com\/personal-finance\/ai-voice-clones-let-scammers-spoof-your-loved-ones-and-take-your-money\/\" target=\"_blank\" rel=\"noopener nofollow\">three to five seconds of audio<\/a>, and create highly convincing videos from a couple of photos. We\u2019ve previously discussed <a href=\"https:\/\/www.kaspersky.com\/blog\/real-or-fake-image-analysis-and-provenance\/50932\/\" target=\"_blank\" rel=\"noopener nofollow\">how to distinguish a real photo or video from a fake and trace its origin to when it was taken or generated<\/a>. Now let\u2019s take a look at how attackers create and use deepfakes in real time, how to spot a fake without forensic tools, and how to protect yourself and loved ones from \u201cclone attacks\u201d.<\/p>\n<h2>How deepfakes are made<\/h2>\n<p>Scammers gather source material for deepfakes from open sources: webinars, public videos on social networks and channels, and online speeches. Sometimes they simply call identity theft targets and keep them on the line for as long as possible to collect data for maximum-quality voice cloning. 
And <a href=\"https:\/\/www.kaspersky.com\/blog\/how-to-prevent-whatsapp-telegram-account-hijacking-and-quishing\/53012\/\" target=\"_blank\" rel=\"noopener nofollow\">hacking the messaging account<\/a> of someone who loves voice and video messages is the ultimate jackpot for scammers. With access to video recordings and voice messages, they can generate realistic fakes that <a href=\"https:\/\/www.telegraph.co.uk\/money\/consumer-affairs\/one-in-12-britons-victims-rise-ai-scams-clone-voice-money\/\" target=\"_blank\" rel=\"noopener nofollow\">95% of folks are unable to tell apart<\/a> from real messages from friends or colleagues.<\/p>\n<p>The tools for creating deepfakes vary widely, from simple Telegram bots to professional generators like HeyGen and ElevenLabs. Scammers use deepfakes together with social engineering: for example, they might first simulate a messenger app call that appears to drop out constantly, then send a pre-generated video message of fairly low quality, blaming it on the supposedly poor connection.<\/p>\n<p>In most cases, the message is about some kind of emergency in which the deepfake victim requires immediate help. Naturally the \u201cfriend in need\u201d is desperate for money, but, as luck would have it, they\u2019ve no access to an ATM, or have lost their wallet, and the bad connection rules out an online transfer. The solution is, of course, to send the money not directly to the \u201cfriend\u201d, but to a fake account, phone number, or cryptowallet.<\/p>\n<p>Such scams often involve pre-generated videos, but of late real-time deepfake streaming services have come into play. Among other things, these allow users to substitute their own face in a chat-roulette or video call.<\/p>\n<h2>How to recognize a deepfake<\/h2>\n<p>If you see a familiar face on the screen together with a recognizable voice but are asked unusual questions, chances are it\u2019s a deepfake scam. 
Fortunately, there are certain visual, auditory, and behavioral signs that can help even non-techies spot a fake.<\/p>\n<h3>Visual signs of a deepfake<\/h3>\n<p><strong>Lighting and shadow issues.<\/strong> Deepfakes often ignore the physics of light: the direction of shadows on the face and in the background may not match, and glares on the skin may look unnatural or not be there at all. Or the person in the video may be half-turned toward the window, but their face is lit by studio lighting. Video-conference participants will recognize this effect: substituted background images can look extremely unnatural.<\/p>\n<p><strong>Blurred or floating facial features.<\/strong> Pay attention to the hairline: deepfakes often show blurring, flickering, or unnatural color transitions along this area. These artifacts are caused by flaws in the algorithm for superimposing the cloned face onto the original.<\/p>\n<p><strong>Unnaturally blinking or \u201cdead\u201d eyes.<\/strong> A person blinks on average 10 to 20 times per minute. Some deepfakes blink too rarely, others too often. Eyelid movements can be too abrupt, and sometimes blinking is out of sync, with one eye not matching the other. \u201cGlassy\u201d or \u201cdead-eye\u201d stares are also characteristic of deepfakes. 
And sometimes a pupil (usually just the one) may twitch randomly due to a neural network hallucination.<\/p>\n<p>When analyzing a static image such as a photograph, it\u2019s also a good idea to zoom in on the eyes and compare the reflections on the irises \u2014 in real photos they\u2019ll be identical; in deepfakes \u2014 often not.<\/p>\n<div id=\"attachment_55251\" style=\"width: 1508px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/37\/2026\/02\/06150434\/how-to-recognize-a-deepfake-01.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-55251\" class=\"wp-image-55251 size-full\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/37\/2026\/02\/06150434\/how-to-recognize-a-deepfake-01.jpg\" alt=\"How to recognize a deepfake: different specular highlights in the eyes in the image on the right reveal a fake \" width=\"1498\" height=\"1015\"><\/a><p id=\"caption-attachment-55251\" class=\"wp-caption-text\">Look at the reflections and glares in the eyes in the real photo (left) and the generated image (right) \u2014 although similar, specular highlights in the eyes in the deepfake are different. <a href=\"https:\/\/arxiv.org\/pdf\/2009.11924\" target=\"_blank\" rel=\"nofollow noopener\"> Source<\/a><\/p><\/div>\n<p><strong>Lip-syncing issues.<\/strong> Even top-quality deepfakes trip up when it comes to synchronizing speech with lip movements. A delay of just a hundred milliseconds is noticeable to the naked eye. It\u2019s often possible to observe an irregular lip shape when pronouncing the sounds <em>m<\/em>, <em>f<\/em>, or<em> t<\/em>. 
All of these are telltale signs of an AI-modeled face.<\/p>\n<p><strong>Static or blurred background.<\/strong> In generated videos, the background often looks unrealistic: it might be too blurry; its elements may not interact with the on-screen face; or sometimes the image behind the person remains motionless even when the camera moves.<\/p>\n<p><strong>Odd facial expressions.<\/strong> Deepfakes do a poor job of imitating emotion: facial expressions may not change in line with the conversation; smiles look frozen, and the fine wrinkles and folds that appear in real faces when expressing emotion are absent \u2014 the fake looks botoxed.<\/p>\n<h3>Auditory signs of a deepfake<\/h3>\n<p>Early AI generators assembled speech from short, monotone phoneme samples, and when the intonation changed, there was an audible shift in pitch, making it easy to recognize a synthesized voice. Although today\u2019s technology has advanced far beyond this, there are other signs that still give away generated voices.<\/p>\n<p><strong>Wooden or electronic tone.<\/strong> If the voice sounds unusually flat, without natural intonation variations, or there\u2019s a vaguely electronic quality to it, there\u2019s a high probability you\u2019re talking to a deepfake. Real speech contains many variations in tone and natural imperfections.<\/p>\n<p><strong>No breathing sounds.<\/strong> Humans take micropauses and breathe in between phrases \u2014 especially in long sentences, not to mention small coughs and sniffs. 
Synthetic voices often lack these nuances, or place them unnaturally.<\/p>\n<p><strong>Robotic speech or sudden breaks.<\/strong> The voice may abruptly cut off, words may sound \u201cglued\u201d together, and the stress and intonation may not be what you\u2019re used to hearing from your friend or colleague.<\/p>\n<p><strong>Lack of\u2026<\/strong> <a href=\"https:\/\/en.wikipedia.org\/wiki\/Shibboleth\" target=\"_blank\" rel=\"noopener nofollow\"><strong>shibboleths<\/strong><\/a><strong> in speech. <\/strong>Pay attention to speech patterns (such as a distinctive accent or characteristic phrases) that are typical of the person in real life but are poorly imitated (if at all) by the deepfake.<\/p>\n<p>To mask visual and auditory artifacts, scammers often simulate poor connectivity by sending a noisy video or audio message. A low-quality video stream or media file is the first red flag that you need to verify the person at the other end.<\/p>\n<h3>Behavioral signs of a deepfake<\/h3>\n<p>Analyzing the movements and behavioral nuances of the caller is perhaps still the most reliable way to spot a deepfake in real time.<\/p>\n<p><strong>Can\u2019t turn their head. <\/strong>During the video call, ask the person to turn their head so they\u2019re looking completely to the side. Most deepfakes are created using portrait photos and videos, so a sideways turn will cause the image to float, distort, or even break up. AI startup <a href=\"http:\/\/metaphysic.ai\" target=\"_blank\" rel=\"noopener nofollow\">Metaphysic.ai<\/a> \u2014 creators of viral Tom Cruise deepfakes \u2014 confirm that head rotation is the most reliable deepfake test at present.<\/p>\n<p><strong>Unnatural gestures.<\/strong> Ask the on-screen person to perform a spontaneous action: wave their hand in front of their face; scratch their nose; take a sip from a cup; cover their eyes with their hands; or point to something in the room. 
Deepfakes have trouble handling impromptu gestures \u2014 hands may pass ghostlike through objects or the face, or fingers may appear distorted, or move unnaturally.<\/p>\n<div id=\"attachment_55253\" style=\"width: 1720px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/37\/2026\/02\/06150440\/how-to-recognize-a-deepfake-02.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-55253\" class=\"wp-image-55253 size-full\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/37\/2026\/02\/06150440\/how-to-recognize-a-deepfake-02.jpg\" alt=\"How to spot a deepfake: when a deepfake hand is waved in front of a deepfake face, they merge together \" width=\"1710\" height=\"962\"><\/a><p id=\"caption-attachment-55253\" class=\"wp-caption-text\">Ask a deepfake to wave a hand in front of its face, and the hand may appear to dissolve. <a href=\"https:\/\/youtu.be\/UPL4asSvK-4\" target=\"_blank\" rel=\"nofollow noopener\"> Source<\/a><\/p><\/div>\n<p><strong>Screen sharing. <\/strong>If the conversation is work-related, ask your chat partner to share their screen and show an on-topic file or document. Without access to your real-life colleague\u2019s device, this will be virtually impossible to fake.<\/p>\n<p><strong>Can\u2019t answer tricky questions.<\/strong> Ask something that only the genuine article could know, for example: \u201cWhat meeting do we have at work tomorrow?\u201d, \u201cWhere did I get this scar?\u201d, \u201cWhere did we go on vacation two years ago?\u201d A scammer won\u2019t be able to answer questions if the answers aren\u2019t present in the hacked chats or publicly available sources.<\/p>\n<p><strong>Don\u2019t know the codeword. <\/strong>Agree with friends and family on a secret word or phrase for emergency use to confirm identity. If a panicked relative asks you to urgently transfer money, ask them for the family codeword. 
A flesh-and-blood relation will reel it off; a deepfake-armed fraudster won\u2019t.<\/p>\n<h2>What to do if you encounter a deepfake<\/h2>\n<p>If you\u2019ve even the slightest suspicion that what you\u2019re talking to isn\u2019t a real human but a deepfake, follow our tips below.<\/p>\n<ul>\n<li><strong>End the chat and call back.<\/strong> The surest check is to end the video call and connect with the person through another channel: call or text their regular phone, or message them in another app. If your opposite number is unhappy about this, pretend the connection dropped out.<\/li>\n<li><strong>Don\u2019t be pressured into sending money.<\/strong> A favorite trick is to create a false sense of urgency. \u201cMom, I need money right now, I\u2019ve had an accident\u201d; \u201cI don\u2019t have time to explain\u201d; \u201cIf you don\u2019t send it in ten minutes, I\u2019m done for!\u201d A real person usually won\u2019t mind waiting a few extra minutes while you double-check the information.<\/li>\n<li><strong>Tell your friend or colleague they\u2019ve been hacked.<\/strong> If a call or message from someone in your contacts comes from a new number or an unfamiliar account, that alone is a red flag \u2014 attackers often create fake profiles or use temporary numbers. But if you get a deepfake call from a contact in a messenger app or your address book, inform them immediately that their account has been hacked \u2014 and do it via another communication channel. 
This will help them take steps to regain access to their account (see our detailed instructions for <a href=\"https:\/\/www.kaspersky.com\/blog\/telegram-account-hacked\/52775\/\" target=\"_blank\" rel=\"noopener nofollow\">Telegram<\/a> and <a href=\"https:\/\/www.kaspersky.com\/blog\/whatsapp-account-hacked\/53069\/\" target=\"_blank\" rel=\"noopener nofollow\">WhatsApp<\/a>), and to minimize potential damage to other contacts, for example, by posting about the hack.<\/li>\n<\/ul>\n<h2>How to stop your own face getting deepfaked<\/h2>\n<ul>\n<li><strong>Restrict public access to your photos and videos.<\/strong> Hide your social media profiles from strangers, limit your friends list to real people, and delete videos with your voice and face from public access.<\/li>\n<li><strong>Don\u2019t give suspicious apps access to your smartphone camera or microphone.<\/strong> Scammers can collect biometric data through fake apps disguised as games or utilities. To stop such programs from getting on your devices, use a <a href=\"https:\/\/me-en.kaspersky.com\/premium?icid=me-en_bb2022-kdplacehd_acq_ona_smm__onl_b2c_kdaily_lnk_sm-team___kprem___\" target=\"_blank\" rel=\"noopener\">proven all-in-one security solution<\/a>.<\/li>\n<li><strong>Use passkeys, unique passwords, and two-factor authentication (2FA) where possible.<\/strong> Even if scammers do create a deepfake with your face, 2FA will make it much harder to access your accounts and use them to send deepfakes. A <a href=\"https:\/\/me-en.kaspersky.com\/password-manager?icid=me-en_kdailyplacehold_acq_ona_smm__onl_b2c_kasperskydaily_wpplaceholder____kpm___\" target=\"_blank\" rel=\"noopener\">cross-platform password manager with support for passkeys and 2FA codes<\/a>\u00a0can help out here.<\/li>\n<li><strong>Teach friends and family how to spot deepfakes.<\/strong> Elderly relatives, young children, and anyone new to technology are the most vulnerable targets. 
Educate them about scams, show them <a href=\"https:\/\/youtu.be\/7akzhpx0EIU\" target=\"_blank\" rel=\"noopener nofollow\">examples of deepfakes<\/a>, and practice using a family codeword.<\/li>\n<li><strong>Use content analyzers.<\/strong> While there\u2019s no silver bullet against deepfakes, there are services that can identify AI-generated content with high accuracy. For graphics, these include <a href=\"https:\/\/undetectable.ai\/ai-image-detector\" target=\"_blank\" rel=\"noopener nofollow\">Undetectable AI<\/a> and <a href=\"https:\/\/app.illuminarty.ai\/\" target=\"_blank\" rel=\"noopener nofollow\">Illuminarty<\/a>; for video \u2014 <a href=\"https:\/\/deepware.ai\/\" target=\"_blank\" rel=\"noopener nofollow\">Deepware<\/a>; and for all types of deepfakes \u2014\u00a0<a href=\"https:\/\/sensity.ai\/\" target=\"_blank\" rel=\"noopener nofollow\">Sensity AI<\/a> and <a href=\"https:\/\/hivemoderation.com\/ai-generated-content-detection\" target=\"_blank\" rel=\"noopener nofollow\">Hive Moderation<\/a>.<\/li>\n<li><strong>Keep a cool head. <\/strong>Scammers apply psychological pressure to hurry victims into acting rashly. 
Remember the golden rule: if a call, video, or voice message from anyone you know rouses even the slightest suspicion, end the conversation and make contact through another channel.<\/li>\n<\/ul>\n<blockquote><p>To protect yourself and loved ones from being scammed, learn more about how scammers deploy deepfakes:<\/p>\n<ul>\n<li><strong><a href=\"https:\/\/www.kaspersky.com\/blog\/ai-phishing-and-scams\/54445\/\" target=\"_blank\" rel=\"noopener nofollow\">How phishers and scammers use AI<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/www.kaspersky.com\/blog\/how-deepfakes-threaten-kyc\/51987\/\" target=\"_blank\" rel=\"noopener nofollow\">How fraudsters bypass customer identity verification using deepfakes<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/www.kaspersky.com\/blog\/real-or-fake-image-analysis-and-provenance\/50932\/\" target=\"_blank\" rel=\"noopener nofollow\">Watch the (verified) birdie, or new ways to recognize fakes<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/www.kaspersky.com\/blog\/how-to-spot-and-prevent-boss-scams\/50861\/\" target=\"_blank\" rel=\"noopener nofollow\">Is it the boss \u2014 or is it a fraudster? 
Scams disguised as urgent orders from top brass<\/a><\/strong><\/li>\n<li><strong><a href=\"https:\/\/www.kaspersky.com\/blog\/audio-deepfake-technology\/48586\/\" target=\"_blank\" rel=\"noopener nofollow\">Don\u2019t believe your ears: voice deepfakes<\/a><\/strong><\/li>\n<\/ul>\n<\/blockquote>\n<input type=\"hidden\" class=\"category_for_banner\" value=\"premium-generic\">\n","protected":false},"excerpt":{"rendered":"<p>Learn how to spot deepfakes in photos, videos, voice messages, and video calls in real time.<\/p>\n","protected":false},"author":2775,"featured_media":25216,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1225],"tags":[1481,2252,2719,519,2117,76,43,695,97,321,521,131],"class_list":{"0":"post-25212","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-privacy","8":"tag-ai","9":"tag-deepfakes","10":"tag-fakes","11":"tag-hacks","12":"tag-neural-networks","13":"tag-phishing","14":"tag-privacy","15":"tag-scam","16":"tag-security-2","17":"tag-technology","18":"tag-threats","19":"tag-tips-2"},"hreflang":[{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/how-to-recognize-a-deepfake\/25212\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/how-to-recognize-a-deepfake\/30152\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/how-to-recognize-a-deepfake\/30027\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/how-to-recognize-a-deepfake\/41250\/"},{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/how-to-recognize-a-deepfake\/55247\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/how-to-recognize-a-deepfake\/30232\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/how-to-recognize-a-deepfake\/35912\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/how-to-recognize-a-deepfake\
/35567\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/me-en.kaspersky.com\/blog\/tag\/deepfakes\/","name":"deepfakes"},"_links":{"self":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/25212","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2775"}],"replies":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=25212"}],"version-history":[{"count":14,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/25212\/revisions"}],"predecessor-version":[{"id":25229,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/25212\/revisions\/25229"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/25216"}],"wp:attachment":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=25212"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/categories?post=25212"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=25212"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}