{"id":21095,"date":"2023-05-15T09:33:20","date_gmt":"2023-05-15T13:33:20","guid":{"rendered":"https:\/\/me-en.kaspersky.com\/blog\/getting-ready-for-deep-fake-threats\/21095\/"},"modified":"2023-05-29T15:19:11","modified_gmt":"2023-05-29T11:19:11","slug":"getting-ready-for-deep-fake-threats","status":"publish","type":"post","link":"https:\/\/me-en.kaspersky.com\/blog\/getting-ready-for-deep-fake-threats\/21095\/","title":{"rendered":"How to get ready for deepfake threats"},"content":{"rendered":"<p>Deepfake is the name given to technology that creates convincing copies of images, videos and voices using AI. Deepfake technologies have been developing rapidly for about five years now. The idea of creating fakes by combining real and generated data is not new, but it\u2019s the use of neural networks and deep learning that has allowed researchers to automate this process and apply it to image, video and audio formats.<\/p>\n<p>In the past, the quality of such fakes was low, and they were easily detected by the naked eye; now it\u2019s become much more difficult to recognize a fake. This is exacerbated by the falling cost of data storage and processing and the emergence of open-source software. These trends make deepfake one of the most dangerous technologies of the future.<\/p>\n<h2>How real can it look?<\/h2>\n<p>In July 2021, enthusiasts published a deepfake video of Morgan Freeman talking about the perception of reality.<\/p>\n<p><span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe class=\"youtube-player\" type=\"text\/html\" width=\"640\" height=\"390\" src=\"https:\/\/www.youtube.com\/embed\/oxXpB9pSETo?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent\" frameborder=\"0\" allowfullscreen=\"true\"><\/iframe><\/span><\/p>\n<p>It looks very realistic, but it\u2019s not Morgan Freeman. 
Facial expressions, hair\u2026 it\u2019s all of high quality, with no noticeable video artifacts. It\u2019s a well-made deepfake, and it shows how easy it has become to deceive our perception of reality.<\/p>\n<h2>What\u2019s the danger?<\/h2>\n<p>The first and most obvious area where deepfake immediately found its place was pornography. Celebrities were the first to suffer from this, but even lesser-known folks soon began to worry about it. Many different scenarios were envisioned: school bullying, fraudulent phone calls requesting money transfers, blackmail of company executives, industrial espionage. Early on it was viewed as a <em>potential<\/em> threat; now it\u2019s for real.<\/p>\n<p><a href=\"https:\/\/www.wsj.com\/articles\/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402\" target=\"_blank\" rel=\"nofollow noopener\">The first known case<\/a> of an attack on a business was in 2019. Scammers used voice-changing technology to rob a British energy company: the attacker impersonated the CEO and tried to steal \u20ac220,000. <a href=\"https:\/\/www.forbes.com\/sites\/thomasbrewster\/2021\/10\/14\/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions\/?sh=42bdebd47559\" target=\"_blank\" rel=\"nofollow noopener\">The second known case<\/a> was in 2020 in the UAE when, also using a voice deepfake, attackers managed to deceive a bank manager and steal $35 million! The scammers had moved on from emails and social media profiles to more advanced attack methods using voice deepfakes. Another <a href=\"https:\/\/www.euronews.com\/next\/2022\/08\/24\/binance-executive-says-scammers-created-deepfake-hologram-of-him-to-trick-crypto-developer\" target=\"_blank\" rel=\"nofollow noopener\">similar case<\/a> became known in 2022, when scammers tried to fool the largest cryptocurrency platform, Binance. 
The Binance executive was surprised when he started receiving thank-you messages about a Zoom meeting he never attended. Using his public images, the attackers managed to generate a deepfake and successfully use it during an online meeting.<\/p>\n<p>Thus, in addition to traditional cyberfraud techniques such as phishing, we now have a new one \u2014 deepfake fraud. It can be used to augment traditional social engineering schemes, as well as for disinformation, blackmail and espionage.<\/p>\n<p>According to an <a href=\"https:\/\/www.ic3.gov\/Media\/Y2022\/PSA220628\" target=\"_blank\" rel=\"nofollow noopener\">FBI alert<\/a>, HR managers have already encountered deepfakes used by cybercriminals applying for remote jobs. Attackers can use images of people found on the internet to create deepfakes, and then use stolen personal data to trick HR managers into hiring them. This may allow them to gain access to employer data, and even unleash malware in the corporate infrastructure. Potentially any business is at risk of this type of fraud.<\/p>\n<p>And those are just the most obvious areas where deepfake fraud can be applied; attackers are constantly inventing new ways to exploit the technology.<\/p>\n<h2>How real is the danger?<\/h2>\n<p>All that sounds quite creepy. But is it really all that bad? Actually, not quite yet: creating a high-quality deepfake is an expensive process.<\/p>\n<p>First, to make a deepfake, a lot of data is needed: the more diverse the data set used, the more convincing the resulting deepfake. If we\u2019re talking about still images, this means that for a quality fake, original photos need to be shot from different angles, with different brightness and lighting settings, and with different facial expressions of the subject. 
Also, a fake snapshot would need to be manually fine-tuned (automation isn\u2019t too helpful here).<\/p>\n<p>Second, if you want to make a truly indistinguishable fake, you need specialized software and lots of computing capacity, and thus a significant budget. Finding free software and trying to make a deepfake on your home PC will lead to unrealistic-looking results.<\/p>\n<p>The abovementioned deepfake Zoom calls add another layer of complexity to the process. Here the bad guys need not only to make a deepfake, but to generate it in real time while maintaining high image quality without noticeable artifacts. Indeed, there are certain applications available that can produce a deepfake video stream in real time, but they can only be used to make a digital clone of a pre-programmed person, not to create a new fake identity. And the default choice is usually limited to famous actors (because there are lots of their images on the internet).<\/p>\n<p>In other words, a deepfake attack is quite possible now, but such fraud is very expensive. At the same time, committing other types of fraud is usually cheaper and more accessible, so deepfake fraud is practiced by only a very few cybercriminals (especially if we\u2019re talking about high-quality fakes).<\/p>\n<p>Of course, that\u2019s no reason to relax \u2014 the technology doesn\u2019t stand still, and within a few years the threat level may increase significantly. There\u2019ve already been <a href=\"https:\/\/techcrunch.com\/2022\/08\/24\/deepfakes-for-all-uncensored-ai-art-model-prompts-ethics-questions\/\" target=\"_blank\" rel=\"nofollow noopener\">attempts<\/a> to create deepfakes using popular modern generative models, such as Stable Diffusion. And such models allow you not only to swap faces, but also to replace objects in the image with almost anything you like.<\/p>\n<h2>Ways to protect against deepfake<\/h2>\n<p>Is there a way to protect yourself and your organization from deepfake fraud? 
Unfortunately, there\u2019s no silver bullet; we can only reduce the risk.<\/p>\n<p>As with any other social engineering method, deepfake fraud targets humans, and the human factor has always been the weakest link in any organization\u2019s security. So first of all, it\u2019s worth <a href=\"https:\/\/me-en.kaspersky.com\/enterprise-security\/security-awareness?icid=me-en_kdailyplacehold_acq_ona_smm__onl_b2b_kasperskydaily_wpplaceholder_______\" target=\"_blank\" rel=\"noopener\">educating employees<\/a> about the possibility of such attacks \u2014 explain this new threat to your colleagues, show them what to look for to spot a deepfake, and maybe demonstrate and publicly analyze a few cases.<\/p>\n<p>What to look for in the image:<\/p>\n<ul>\n<li>Unnatural eye movement<\/li>\n<li>Unnatural facial expressions and movements<\/li>\n<li>Unnatural hair and skin color<\/li>\n<li>Awkward facial-feature positioning<\/li>\n<li>A lack of emotion<\/li>\n<li>Excessively smooth faces<\/li>\n<li>Double eyebrows<\/li>\n<\/ul>\n<p>It\u2019s probably also a good time to strengthen your overall security processes. It\u2019s worth implementing multi-factor authentication for all processes that involve the transfer of sensitive data, and maybe deploying anomaly-detection technologies that detect and respond to unusual user behavior.<\/p>\n<p>Also, deepfakes can be fought with the same tools that enable their creation: machine learning. Large companies such as Twitter and Facebook have already developed their own tools for detecting deepfakes, but, unfortunately, they\u2019re unavailable to the general public. 
Still, it shows that the cybersecurity community understands the significance of the deepfake threat and is already inventing and improving ways to protect against it.<\/p>\n<input type=\"hidden\" class=\"category_for_banner\" value=\"kasap\"><input type=\"hidden\" class=\"placeholder_for_banner\" data-cat_id=\"kasap\" value=\"44868\">\n","protected":false},"excerpt":{"rendered":"<p>Cybercriminals are increasingly using deepfakes in attacks against companies. What can we do to be safer? <\/p>\n","protected":false},"author":2738,"featured_media":21096,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1318,1916],"tags":[2642,2252,489],"class_list":{"0":"post-21095","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business","8":"category-enterprise","9":"tag-deepfake","10":"tag-deepfakes","11":"tag-social-engineering"},"hreflang":[{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/getting-ready-for-deep-fake-threats\/21095\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/getting-ready-for-deep-fake-threats\/25677\/"},{"hreflang":"en-us","url":"https:\/\/usa.kaspersky.com\/blog\/getting-ready-for-deep-fake-threats\/28323\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/getting-ready-for-deep-fake-threats\/25975\/"},{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/getting-ready-for-deep-fake-threats\/48193\/"},{"hreflang":"ja","url":"https:\/\/blog.kaspersky.co.jp\/getting-ready-for-deep-fake-threats\/34110\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/getting-ready-for-deep-fake-threats\/31983\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/getting-ready-for-deep-fake-threats\/31672\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/me-en.kaspersky.com\/blog\/tag\/deepfakes\/","name":"deepfake
s"},"_links":{"self":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/21095","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2738"}],"replies":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=21095"}],"version-history":[{"count":2,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/21095\/revisions"}],"predecessor-version":[{"id":21135,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/21095\/revisions\/21135"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/21096"}],"wp:attachment":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=21095"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/categories?post=21095"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=21095"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}