{"id":24620,"date":"2025-09-03T19:48:31","date_gmt":"2025-09-03T15:48:31","guid":{"rendered":"https:\/\/me-en.kaspersky.com\/blog\/shadow-ai-3-policies\/24620\/"},"modified":"2025-09-03T19:48:31","modified_gmt":"2025-09-03T15:48:31","slug":"shadow-ai-3-policies","status":"publish","type":"post","link":"https:\/\/me-en.kaspersky.com\/blog\/shadow-ai-3-policies\/24620\/","title":{"rendered":"Three approaches to workplace &#8220;shadow AI&#8221; from the cybersecurity standpoint"},"content":{"rendered":"<p>A recent MIT report, <a href=\"https:\/\/mlq.ai\/media\/quarterly_decks\/v0.1_State_of_AI_in_Business_2025_Report.pdf\" target=\"_blank\" rel=\"nofollow noopener\">The GenAI Divide: State of AI in Business 2025<\/a>, brought on a significant cooling of tech stocks. While the report offers interesting observations on the economics and organization of AI implementation in business, it also contains valuable insights for cybersecurity teams. The authors weren\u2019t concerned with security issues: the words \u201csecurity\u201d, \u201ccybersecurity\u201d, or \u201csafety\u201d don\u2019t even appear in the report. However, its findings can and should be considered when planning new corporate AI security policies.<\/p>\n<p>The key observation is that while only 40% of surveyed organizations have purchased an LLM subscription, 90% of employees regularly use personal AI-powered tools for work tasks. And this \u201cshadow AI economy\u201d \u2014 the term used in the report \u2014 is said to be more effective than the official one. A mere 5% of corporations see economic benefit from their AI implementations, whereas employees are successfully boosting their personal productivity.<\/p>\n<p>The top-down approach to AI implementation is often unsuccessful. Therefore, the authors recommend \u201clearning from shadow usage and analyzing which personal tools deliver value before procuring enterprise alternatives\u201d. 
So how does this advice align with cybersecurity rules?<\/p>\n<h2>A complete ban on shadow AI<\/h2>\n<p>A policy favored by many CISOs is to test and implement\u00a0\u2014 or better yet, build one\u2019s own\u00a0\u2014 AI tools and then simply ban all others. This approach can be economically inefficient, potentially causing the company to fall behind its competitors. It\u2019s also difficult to enforce, as ensuring compliance can be both challenging and expensive. Nevertheless, for some highly regulated industries or for business units that handle extremely sensitive data, a prohibitive policy might be the only option. The following methods can be used to implement it:<\/p>\n<ul>\n<li>Block access to all popular AI tools at the network level using a network filtering tool.<\/li>\n<li>Configure a <a href=\"https:\/\/encyclopedia.kaspersky.com\/glossary\/data-loss-prevention-dlp\/\" target=\"_blank\" rel=\"noopener\">DLP<\/a> system to monitor and block data from being transferred to AI applications and services; this includes preventing the copying and pasting of large text blocks via the clipboard.<\/li>\n<li>Use an application allowlist policy on corporate devices to prevent employees from running third-party applications that could be used for direct AI access or to bypass other security measures.<\/li>\n<li>Prohibit the use of personal devices for work-related tasks.<\/li>\n<li>Use additional tools, such as video analytics, to detect and limit employees\u2019 ability to take pictures of their computer screens with personal smartphones.<\/li>\n<li>Establish a company-wide policy that prohibits the use of any AI tools except those on a management-approved list and deployed by corporate security teams. 
This policy should be formally documented, and employees should receive appropriate training.<\/li>\n<\/ul>\n<h2>Unrestricted use of AI<\/h2>\n<p>If the company considers the risks of using AI tools to be insignificant, or has departments that don\u2019t handle personal or other sensitive data, the use of AI by these teams can be all but unrestricted. By setting a short list of hygiene measures and restrictions, the company can observe LLM usage habits, identify popular services, and use this data to plan future actions and refine its security measures. Even with this democratic approach, it\u2019s still necessary to:<\/p>\n<ul>\n<li>Train employees on the basics of responsible AI use with the help of a cybersecurity module. A good starting place: <a href=\"https:\/\/www.kaspersky.com\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/50562\/\" target=\"_blank\" rel=\"noopener nofollow\">our recommendations<\/a>, or <a href=\"https:\/\/k-asap.com\/en\/?icid=me-en_kdailyplacehold_acq_ona_smm__onl_b2b_kasperskydaily_wpplaceholder____kasap___\" target=\"_blank\" rel=\"noopener\">adding a specialized course to the company\u2019s security awareness platform<\/a>.<\/li>\n<li>Set up detailed application traffic logging to analyze the patterns of AI use and the types of services being used.<\/li>\n<li>Make sure that all employees have an <a href=\"https:\/\/me-en.kaspersky.com\/enterprise-security\/endpoint-detection-response-edr?icid=me-en_kdailyplacehold_acq_ona_smm__onl_b2b_kasperskydaily_wpplaceholder_______\" target=\"_blank\" rel=\"noopener\">EPP\/EDR agent<\/a> installed on their work devices, and a <a href=\"https:\/\/me-en.kaspersky.com\/premium?icid=me-en_bb2022-kdplacehd_acq_ona_smm__onl_b2c_kdaily_lnk_sm-team___kprem___\" target=\"_blank\" rel=\"noopener\">robust security solution on their personal gadgets<\/a>. 
(Fake \u201cChatGPT\u201d apps have been <a href=\"https:\/\/www.kaspersky.com\/blog\/chatgpt-stealer-win-client\/47274\/\" target=\"_blank\" rel=\"noopener nofollow\">scammers\u2019 bait of choice<\/a> for spreading infostealers in 2024\u20132025.)<\/li>\n<li>Conduct regular surveys to find out how often AI is being used and for what tasks. Based on telemetry and survey data, measure the effect and risks of its use to adjust your policies.<\/li>\n<\/ul>\n<h2>Balanced restrictions on AI use<\/h2>\n<p>When it comes to company-wide AI usage, neither extreme\u00a0\u2014 a total ban or total freedom\u00a0\u2014 is likely to fit. A more versatile approach is a policy that grants different levels of AI access based on the type of data involved. Full implementation of such a policy requires:<\/p>\n<ul>\n<li>A specialized AI proxy that both cleans queries on the fly by removing specific types of sensitive data (such as names or customer IDs) and uses role-based access control to block inappropriate use cases.<\/li>\n<li>An IT self-service portal for employees to declare their use of AI tools\u00a0\u2014 from basic models and services to specialized applications and browser extensions.<\/li>\n<li>A solution (<a href=\"https:\/\/encyclopedia.kaspersky.com\/glossary\/next-generation-firewall-ngfw\/\" target=\"_blank\" rel=\"noopener\">NGFW<\/a>, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Cloud_access_security_broker\" target=\"_blank\" rel=\"nofollow noopener\">CASB<\/a>, <a href=\"https:\/\/encyclopedia.kaspersky.com\/glossary\/data-loss-prevention-dlp\/\" target=\"_blank\" rel=\"noopener\">DLP<\/a>, or other) for detailed monitoring and control of AI usage at the level of specific requests for each service.<\/li>\n<li>Only for companies that build software: modified CI\/CD pipelines and SAST\/DAST tools to automatically identify AI-generated code and flag it for additional verification steps.<\/li>\n<li>As with the unrestricted scenario, regular employee training, surveys, and 
robust security for both work and personal devices.<\/li>\n<\/ul>\n<p>With these requirements in place, you can develop a policy that covers different departments and various types of information. It might look something like this:<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"150\">Data type<\/td>\n<td width=\"150\">Public-facing AI (from personal devices and accounts)<\/td>\n<td width=\"150\">External AI service (via a corporate AI proxy)<\/td>\n<td width=\"150\">On-premise or trusted cloud AI tools<\/td>\n<\/tr>\n<tr>\n<td width=\"150\">Public data (such as ad copy)<\/td>\n<td width=\"150\">Permitted (declared via the company portal)<\/td>\n<td width=\"150\">Permitted (logged)<\/td>\n<td width=\"150\">Permitted (logged)<\/td>\n<\/tr>\n<tr>\n<td width=\"150\">General internal data (such as email content)<\/td>\n<td width=\"150\">Discouraged but not blocked. Requires declaration<\/td>\n<td width=\"150\">Permitted (logged)<\/td>\n<td width=\"150\">Permitted (logged)<\/td>\n<\/tr>\n<tr>\n<td width=\"150\">Confidential data (such as application source code, legal or HR communications)<\/td>\n<td width=\"150\">Blocked by DLP\/CASB\/NGFW<\/td>\n<td width=\"150\">Permitted for specific, manager-approved scenarios (personal data must be removed; code requires both automated and manual checks)<\/td>\n<td width=\"150\">Permitted (logged, with personal data removed as needed)<\/td>\n<\/tr>\n<tr>\n<td width=\"150\">High-impact regulated data (financial, medical, and so on)<\/td>\n<td width=\"150\">Prohibited<\/td>\n<td width=\"150\">Prohibited<\/td>\n<td width=\"150\">Permitted with CISO approval, subject to regulatory storage requirements<\/td>\n<\/tr>\n<tr>\n<td width=\"150\">Highly critical and classified data<\/td>\n<td width=\"150\">Prohibited<\/td>\n<td width=\"150\">Prohibited<\/td>\n<td width=\"150\">Prohibited (exceptions possible only with board of directors approval)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>To enforce the policy, a multi-layered 
organizational approach is necessary in addition to technical tools. First and foremost, <a href=\"https:\/\/k-asap.com\/en\/?icid=me-en_kdailyplacehold_acq_ona_smm__onl_b2b_kasperskydaily_wpplaceholder____kasap___\" target=\"_blank\" rel=\"noopener\">employees need to be trained<\/a> on the risks associated with AI\u00a0\u2014 from data leaks and hallucinations to prompt injections. This training should be mandatory for everyone in the organization.<\/p>\n<p>After the initial training, it\u2019s essential to develop more detailed policies and provide advanced training for department heads. This will empower them to make informed decisions about whether to approve or deny requests to use specific data with public AI tools.<\/p>\n<p>Initial policies, criteria, and measures are just the beginning; they need to be regularly updated. This involves analyzing data, refining real-world AI use cases, and monitoring popular tools. A self-service portal is needed as a stress-free environment where employees can explain what AI tools they\u2019re using and for what purposes. This valuable feedback enriches your analytics, helps build a business case for AI adoption, and provides a role-based model for applying the right security policies.<\/p>\n<p>Finally, a multi-tiered system for responding to violations is a must. Possible steps:<\/p>\n<ul>\n<li>An automated warning and a mandatory micro-training course on the given violation.<\/li>\n<li>A private meeting between the employee, their department head, and an information security officer.<\/li>\n<li>A temporary ban on AI-powered tools.<\/li>\n<li>Strict disciplinary action through HR.<\/li>\n<\/ul>\n<h2>A comprehensive approach to AI security<\/h2>\n<p>The policies discussed here cover a relatively narrow range of risks associated with the use of SaaS solutions for generative AI. 
To create a full-fledged policy that addresses the whole spectrum of relevant risks, see our <a href=\"https:\/\/www.kaspersky.com\/blog\/ai-safe-deployment-guidelines\/52789\/\" target=\"_blank\" rel=\"noopener nofollow\">guidelines for securely implementing AI systems<\/a>, developed by Kaspersky in collaboration with other trusted experts.<\/p>\n<input type=\"hidden\" class=\"category_for_banner\" value=\"mdr\"><input type=\"hidden\" class=\"placeholder_for_banner\" data-cat_id=\"mdr\" value=\"18953\">\n","protected":false},"excerpt":{"rendered":"<p>Most employees are already using personal LLM subscriptions for work tasks. How do you balance staying competitive with preventing data leaks?<\/p>\n","protected":false},"author":2722,"featured_media":24621,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1318,1916],"tags":[1481,2822,1415],"class_list":{"0":"post-24620","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business","8":"category-enterprise","9":"tag-ai","10":"tag-llm","11":"tag-machine-learning"},"hreflang":[{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/shadow-ai-3-policies\/24620\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/shadow-ai-3-policies\/29516\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/shadow-ai-3-policies\/29447\/"},{"hreflang":"es-mx","url":"https:\/\/latam.kaspersky.com\/blog\/shadow-ai-3-policies\/28569\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/shadow-ai-3-policies\/40409\/"},{"hreflang":"tr","url":"https:\/\/www.kaspersky.com.tr\/blog\/shadow-ai-3-policies\/13763\/"},{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/shadow-ai-3-policies\/54252\/"},{"hreflang":"fr","url":"https:\/\/www.kaspersky.fr\/blog\/shadow-ai-3-policies\/23155\/"},{"hreflang":"de","url":"https:\/\/www.kas
persky.de\/blog\/shadow-ai-3-policies\/32653\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/shadow-ai-3-policies\/29626\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/shadow-ai-3-policies\/35375\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/shadow-ai-3-policies\/35004\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/me-en.kaspersky.com\/blog\/tag\/ai\/","name":"AI"},"_links":{"self":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/24620","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2722"}],"replies":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=24620"}],"version-history":[{"count":0,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/24620\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/24621"}],"wp:attachment":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=24620"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/categories?post=24620"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=24620"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}