{"id":24794,"date":"2025-10-10T12:32:29","date_gmt":"2025-10-10T16:32:29","guid":{"rendered":"https:\/\/me-en.kaspersky.com\/blog\/vibe-coding-2025-risks\/24794\/"},"modified":"2025-10-28T16:13:35","modified_gmt":"2025-10-28T12:13:35","slug":"vibe-coding-2025-risks","status":"publish","type":"post","link":"https:\/\/me-en.kaspersky.com\/blog\/vibe-coding-2025-risks\/24794\/","title":{"rendered":"The hidden dangers of AI coding"},"content":{"rendered":"<p>Although the benefits of AI assistants in the workplace <a href=\"https:\/\/www.kaspersky.com\/blog\/shadow-ai-3-policies\/54252\/\" target=\"_blank\" rel=\"noopener nofollow\">remain debatable<\/a>, where they\u2019re being adopted most confidently of all is in software development. Here, LLMs play many roles \u2014 from refactoring and documentation, to building whole applications. However, traditional information security problems in development are now compounded by the unique vulnerabilities of AI models. At this intersection, new bugs and issues emerge almost weekly.<\/p>\n<h2>Vulnerable AI-generated code<\/h2>\n<p>When an LLM generates code, it may include bugs or security flaws. After all, these models are trained on publicly available data from the internet \u2014 including thousands of examples of low-quality code. A recent Veracode <a href=\"https:\/\/www.veracode.com\/blog\/genai-code-security-report\/\" target=\"_blank\" rel=\"noopener nofollow\">study<\/a> found that leading AI models now produce code that compiles successfully 90% of the time. Less than two years ago, this figure was less than 20%. However, the security of that code has not improved \u2014 45% still contains classic vulnerabilities from the <a href=\"https:\/\/owasp.org\/www-project-top-ten\/\" target=\"_blank\" rel=\"noopener nofollow\">OWASP Top-10 list<\/a>, with little change in the last two years. The study covered over a hundred popular LLMs and code fragments in Java, Python, C#, and JavaScript. Thus, regardless of whether the LLM is used for \u201ccode completion\u201d in Windsurf or \u201c<a href=\"https:\/\/en.wikipedia.org\/wiki\/Vibe_coding\" target=\"_blank\" rel=\"noopener nofollow\">vibe coding<\/a>\u201d in Loveable, the final application must undergo thorough vulnerability testing. But in practice this rarely happens: according to a <a href=\"https:\/\/www.wiz.io\/blog\/common-security-risks-in-vibe-coded-apps\" target=\"_blank\" rel=\"noopener nofollow\">Wiz study<\/a>, 20% of vibe-coded apps have serious vulnerabilities or configuration errors.<\/p>\n<p>As an example of such flaws, the case of the women-only dating app, Tea, is often used, which became notorious after <a href=\"https:\/\/www.bleepingcomputer.com\/news\/security\/tea-app-leak-worsens-with-second-database-exposing-user-chats\/\" target=\"_blank\" rel=\"noopener nofollow\">two major data leaks<\/a>. However, this app predates vibe coding. Whether AI was to blame for Tea\u2019s slip-up will be <a href=\"https:\/\/news.bloomberglaw.com\/bloomberg-law-analysis\/analysis-trouble-brews-for-tea-app-amid-vibe-coding-allegations\" target=\"_blank\" rel=\"noopener nofollow\">determined in court<\/a>. In the case of the startup Enrichlead, though, AI was definitely the culprit. Its founder <a href=\"https:\/\/twitter.com\/leojr94_\/status\/1900767509621674109\" target=\"_blank\" rel=\"noopener nofollow\">boasted<\/a> on social media that 100% of his platform\u2019s code was written by Cursor AI, with \u201czero hand-written code\u201d. 
<h2>Common vulnerabilities in AI-generated code</h2>
<p>Although AI-assisted programming has only existed for a year or two, there's already enough data to identify its most <a href="https://www.wiz.io/blog/common-security-risks-in-vibe-coded-apps" target="_blank" rel="noopener nofollow">common mistakes</a>. Typically, these are:</p>
<ul>
<li>Missing input validation, no sanitization of user input, and other basic errors that lead to classic vulnerabilities such as cross-site scripting (XSS) and SQL injection.</li>
<li>API keys and other secrets hardcoded directly into the webpage, visible to anyone who views its source.</li>
<li>Authentication logic implemented entirely on the client side, directly in the site's code running in the browser, where it can easily be modified to bypass any checks.</li>
<li>Logging errors, ranging from insufficient filtering when writing to logs to a complete absence of logging.</li>
<li>Overly powerful and dangerous functions. AI models are optimized to output code that solves a task in the shortest way possible, but the shortest way is often insecure. A textbook example is <a href="https://cloudsecurityalliance.org/blog/2025/07/09/understanding-security-risks-in-ai-generated-code" target="_blank" rel="noopener nofollow">using the eval function</a> for mathematical operations on user input, which opens the door to arbitrary code execution in the generated application (see the sketch after this list).</li>
<li>Outdated or non-existent dependencies. AI-generated code often references old versions of libraries, makes outdated or unsafe API calls, or even tries to <a href="https://www.kaspersky.com/blog/ai-slopsquatting-supply-chain-risk/53327/" target="_blank" rel="noopener nofollow">import fictitious libraries</a>. The latter is particularly dangerous because attackers can create a malicious library with a plausible name, and the AI agent will include it in a real project.</li>
</ul>
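<p>The eval shortcut is easy to demonstrate. Below is a minimal sketch, assuming a calculator-style feature that evaluates a user-supplied arithmetic expression; the function names are hypothetical. The safe variant parses the expression with Python's ast module and applies only whitelisted operators.</p>
<pre><code>import ast
import operator

# Whitelist of arithmetic operators the safe evaluator may apply.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calculate_vulnerable(expression: str):
    # The shortest solution, and the one models tend to produce:
    # eval() runs arbitrary Python, so input like
    # "__import__('os').system('id')" is executed, not calculated.
    return eval(expression)

def calculate_safe(expression: str) -> float:
    # Walk the parsed expression tree, allowing only numeric
    # literals and the four whitelisted binary operators.
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("expression contains a disallowed element")
    return _eval(ast.parse(expression, mode="eval").body)

print(calculate_safe("2 + 2 * 3"))  # 8
</code></pre>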
<p>In a systematic study, the authors <a href="https://arxiv.org/pdf/2412.15004" target="_blank" rel="noopener nofollow">scanned AI-generated code</a> for weaknesses included in the <a href="https://cwe.mitre.org/top25/" target="_blank" rel="noopener nofollow">MITRE CWE Top 25 list</a>. The most common issues were CWE-94 (code injection), CWE-78 (OS command injection), CWE-190 (integer overflow), CWE-306 (missing authentication), and CWE-434 (unrestricted file upload).</p>
<p>A striking example of CWE-94 was the recent compromise of the Nx platform, <a href="https://www.kaspersky.com/blog/nx-build-s1ngularity-supply-chain-attack/54223/" target="_blank" rel="noopener nofollow">which we covered previously</a>. Attackers managed to trojanize a popular development tool by stealing a token that let them publish new versions of the product. The token theft exploited a vulnerability <a href="https://github.com/nrwl/nx/pull/32458" target="_blank" rel="noopener nofollow">introduced by a simple AI-generated code fragment</a>.</p>
<h2>Dangerous prompts</h2>
<p>The developers' saying "done exactly according to the spec" applies equally when working with an AI assistant. If the prompt for creating a function or application is vague and doesn't mention security aspects, the likelihood of generating vulnerable code rises sharply. <a href="https://arxiv.org/pdf/2502.06039" target="_blank" rel="noopener nofollow">A dedicated study</a> found that even a general remark like "make sure the code follows best practices for secure code" cut the rate of vulnerabilities in half.</p>
<p>The most effective approach, however, is detailed, language-specific security guidance that references the MITRE or OWASP error lists. A large collection of such security instructions from Wiz Research is available on <a href="https://github.com/wiz-sec-public/secure-rules-files" target="_blank" rel="noopener nofollow">GitHub</a>; it's recommended to add them to AI assistants' system prompts via files such as <em>claude.md</em>, <em>.windsurfrules</em>, or similar.</p>
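<p>To give a flavor of what such guidance looks like, here is a short, hypothetical excerpt of the kind of rules file that could be dropped into <em>claude.md</em> or <em>.windsurfrules</em>; the wording is illustrative, not taken from the Wiz repository.</p>
<pre><code># Security rules for generated Python code (illustrative excerpt)

- Never build SQL queries by string concatenation; always use parameterized queries.
- Never pass user input to eval(), exec(), pickle.loads(), or os.system().
- Read secrets from environment variables or a secrets manager; never hardcode them.
- Validate, type-check, and length-limit all user input on the server side (OWASP Top 10, A03).
- Pin dependency versions, and import only packages that actually exist in the package registry.
</code></pre>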
<h2>Security degradation during revisions</h2>
<p>When AI-generated code is repeatedly revised through follow-up prompts, its security deteriorates. A recent <a href="https://arxiv.org/abs/2506.11022" target="_blank" rel="noopener nofollow">study</a> had GPT-4o modify previously written code up to 40 times, with the researchers scanning each version for vulnerabilities after every round. After only five iterations, the code contained 37% more critical vulnerabilities than the initial version. The study tested four prompting strategies: three emphasized, respectively, performance, security, and new functionality, while the fourth used deliberately unclear prompts.</p>
<p>When prompts focused on adding new features, 158 vulnerabilities appeared, including 29 critical ones. When the prompts emphasized secure coding, the number dropped significantly, but still included 38 new vulnerabilities, seven of them critical.</p>
<p>Interestingly, the "security-focused" prompts produced the highest percentage of errors in cryptography-related functions.</p>
<h2>Ignoring industry context</h2>
<p>Sectors such as finance, healthcare, and logistics have technical, organizational, and legal requirements that must be considered during app development. AI assistants are unaware of these constraints, an issue often called "missing depth". As a result, the storage and processing methods for personal, medical, and financial data mandated by local or industry regulations won't be reflected in AI-generated code. For example, an assistant might write a mathematically correct function for calculating deposit interest, yet ignore the rounding rules enforced by regulators. Healthcare data regulations often require detailed logging of every access attempt, which is something AI won't automatically implement at the proper level of detail.</p>
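<p>The deposit-interest example is easy to make concrete. Below is a minimal sketch, assuming a hypothetical regulatory rule of half-up rounding to the cent: binary floats can't even represent many decimal amounts exactly, whereas Python's decimal module makes the rounding rule explicit and auditable.</p>
<pre><code>from decimal import Decimal, ROUND_HALF_UP

def interest_naive(balance: float, annual_rate: float) -> float:
    # Mathematically correct, but binary floats can't represent
    # values like 2.675 exactly: round(2.675, 2) == 2.67 in Python,
    # which a half-up regulator would reject.
    return round(balance * annual_rate, 2)

def interest_regulated(balance: str, annual_rate: str) -> Decimal:
    # Exact decimal arithmetic with an explicit rounding rule
    # (half-up to the cent), matching the assumed regulation.
    amount = Decimal(balance) * Decimal(annual_rate)
    return amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(round(2.675, 2))                        # 2.67, a float representation artifact
print(interest_regulated("107.00", "0.025"))  # 2.68, rounded as the rule demands
</code></pre>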
<h2>Application misconfiguration</h2>
<p>Vulnerabilities are not limited to the vibe code itself. Applications created through vibe coding are often built by inexperienced users who either don't configure the runtime environment at all, or configure it according to advice from the same AI. This leads to dangerous misconfigurations:</p>
<ul>
<li>Databases required by the application are created with overly broad external access permissions. This results in leaks like Tea/<a href="https://therecord.media/brazil-lesbian-dating-app-shuts-down-vulnerability" target="_blank" rel="noopener nofollow">Sapphos</a>, where the attacker doesn't even need to use the application to download or delete the entire database.</li>
<li>Internal corporate applications are left publicly accessible without authentication.</li>
<li>Applications are granted elevated permissions for access to critical databases. Combined with the vulnerabilities of AI-generated code, this simplifies SQL injection and similar attacks.</li>
</ul>
<h2>Platform vulnerabilities</h2>
<p>Most vibe-coding platforms run applications generated from prompts directly on their own servers. This ties developers to the platform, exposing them to its vulnerabilities and making them dependent on its security practices. For example, in July a vulnerability was <a href="https://www.wiz.io/blog/critical-vulnerability-base44" target="_blank" rel="noopener nofollow">discovered in the Base44 platform</a> that allowed unauthenticated attackers to access any private application.</p>
<h2>Development-stage threats</h2>
<p>The very presence of an assistant with broad access rights on the developer's computer creates risks. Here are a few examples:</p>
<p>The CurXecute vulnerability (<a href="https://github.com/cursor/cursor/security/advisories/GHSA-4cxx-hrm3-49rm" target="_blank" rel="noopener nofollow">CVE-2025-54135</a>) allowed attackers to make the popular AI development tool Cursor execute arbitrary commands on the developer's machine. All it took was an active Model Context Protocol (MCP) server connected to Cursor that an external party could reach. This is a typical setup: MCP servers give AI agents access to Slack messages, Jira issues, and so on, and prompt injection can be performed through any of these channels.</p>
<p>The EscapeRoute vulnerability (<a href="https://github.com/modelcontextprotocol/servers/security/advisories/GHSA-q66q-fx2p-7w4m" target="_blank" rel="noopener nofollow">CVE-2025-53109</a>) allowed reading and writing of arbitrary files on the developer's disk. The flaw existed in Anthropic's popular MCP server, which lets AI agents read and write files in the system; the server's access restrictions simply didn't work.</p>
<p>A <a href="https://thehackernews.com/2025/09/first-malicious-mcp-server-found.html" target="_blank" rel="noopener nofollow">malicious MCP server</a> that let AI agents send and receive email via Postmark simultaneously forwarded all correspondence to a hidden address. We predicted the emergence of such <a href="https://securelist.com/model-context-protocol-for-ai-integration-abused-in-supply-chain-attacks/117473/" target="_blank" rel="noopener">malicious MCP servers</a> back in September.</p>
<p>A vulnerability in the Gemini command-line interface allowed <a href="https://github.com/google-gemini/gemini-cli/pull/4795" target="_blank" rel="noopener nofollow">arbitrary command execution</a> when a developer simply asked the AI assistant to analyze a new project's code. The malicious injection was triggered from a <em>readme.md</em> file.</p>
<p>Amazon's Q Developer extension for Visual Studio Code briefly contained <a href="https://www.bleepingcomputer.com/news/security/amazon-ai-coding-agent-hacked-to-inject-data-wiping-commands/" target="_blank" rel="noopener nofollow">instructions to wipe all data</a> from the developer's computer. Exploiting a mistake by Amazon's developers, an attacker managed to insert this malicious prompt into the assistant's public code without any special privileges. Fortunately, a small coding error prevented it from executing.</p>
<p>A vulnerability in the Claude Code agent (<a href="https://embracethered.com/blog/posts/2025/claude-code-exfiltration-via-dns-requests/" target="_blank" rel="noopener nofollow">CVE-2025-55284</a>) allowed data to be exfiltrated from the developer's computer through DNS requests. The prompt injection, which relied on common utilities that run automatically without confirmation, could be embedded in any code analyzed by the agent.</p>
<p>The autonomous AI agent Replit <a href="https://twitter.com/jasonlk/status/1946069562723897802" target="_blank" rel="noopener nofollow">deleted the production database of a project</a> it was developing because it decided the database required a cleanup, violating a direct instruction that prohibited modifications (a code freeze). Behind this unexpected AI behavior lay a key architectural flaw: at the time, Replit had <a href="https://x.com/jasonlk/status/1947765754050580959" target="_blank" rel="noopener nofollow">no separation</a> between test and production databases.</p>
<p>A prompt injection placed in a source-code comment caused the Windsurf development environment to <a href="https://embracethered.com/blog/posts/2025/windsurf-spaiware-exploit-persistent-prompt-injection/" target="_blank" rel="noopener nofollow">automatically store malicious instructions in its long-term memory</a>, allowing data to be stolen from the system for months afterward.</p>
<p>In the <a href="https://www.kaspersky.com/blog/nx-build-s1ngularity-supply-chain-attack/54223/" target="_blank" rel="noopener nofollow">Nx compromise incident</a>, command-line tools for Claude, Gemini, and Q were used to search an infected system for passwords and keys that could be stolen.</p>
<h2>How to use AI-generated code safely</h2>
<p>The risk posed by AI-generated code can be significantly, though not completely, reduced through a mix of organizational and technical measures:</p>
<ul>
<li>Implement automatic review of AI-generated code as it's written, using optimized <a href="https://en.wikipedia.org/wiki/Static_application_security_testing" target="_blank" rel="noopener nofollow">SAST</a> tools (a minimal sketch follows this list).</li>
<li>Embed security requirements into the system prompts of all AI environments.</li>
<li>Have experienced human specialists perform detailed code reviews, supported by specialized AI-powered security analysis tools to increase effectiveness.</li>
<li>Train developers to write secure prompts and, more broadly, provide them with <a href="https://k-asap.com/en/?icid=me-en_kdailyplacehold_acq_ona_smm__onl_b2b_kasperskydaily_wpplaceholder____kasap___" target="_blank" rel="noopener">in-depth education on the secure use of AI</a>.</li>
</ul>
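<p>As a sketch of the first measure: a minimal gate that runs an open-source SAST scanner over a directory of freshly generated code and blocks the change if anything is found. Bandit is used purely as an example of a Python-oriented scanner (assuming it's installed); any SAST tool with a command-line interface fits the same pattern in CI or a pre-commit hook.</p>
<pre><code>import subprocess
import sys

def sast_gate(path: str = "src") -> int:
    # Scan the directory recursively (-r), reporting only findings of
    # medium severity or higher (-ll). Bandit exits non-zero when it
    # finds issues, which is exactly what a CI gate needs.
    result = subprocess.run(
        ["bandit", "-r", path, "-ll"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stdout)
        print("SAST findings detected: refusing to merge the generated code.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(sast_gate())
</code></pre>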
value=\"kasap\">\n","protected":false},"excerpt":{"rendered":"<p>How AI-generated code is changing cybersecurity \u2014 and what developers and &#8220;vibe coders&#8221; should expect. <\/p>\n","protected":false},"author":2722,"featured_media":24795,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1318,1916,1917],"tags":[1481],"class_list":{"0":"post-24794","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business","8":"category-enterprise","9":"category-smb","10":"tag-ai"},"hreflang":[{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/vibe-coding-2025-risks\/24794\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/vibe-coding-2025-risks\/29724\/"},{"hreflang":"ar","url":"https:\/\/me.kaspersky.com\/blog\/vibe-coding-2025-risks\/12914\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/vibe-coding-2025-risks\/29613\/"},{"hreflang":"es-mx","url":"https:\/\/latam.kaspersky.com\/blog\/vibe-coding-2025-risks\/28663\/"},{"hreflang":"es","url":"https:\/\/www.kaspersky.es\/blog\/vibe-coding-2025-risks\/31557\/"},{"hreflang":"it","url":"https:\/\/www.kaspersky.it\/blog\/vibe-coding-2025-risks\/30214\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/vibe-coding-2025-risks\/40659\/"},{"hreflang":"tr","url":"https:\/\/www.kaspersky.com.tr\/blog\/vibe-coding-2025-risks\/13915\/"},{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/vibe-coding-2025-risks\/54584\/"},{"hreflang":"fr","url":"https:\/\/www.kaspersky.fr\/blog\/vibe-coding-2025-risks\/23307\/"},{"hreflang":"de","url":"https:\/\/www.kaspersky.de\/blog\/vibe-coding-2025-risks\/32829\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/vibe-coding-2025-risks\/29817\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/vibe-coding-2025-risks\/35557\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/vibe-coding-2025-risks\/35179\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/me-en.kaspersky.com\/blog\/tag\/ai\/","name":"AI"},"_links":{"self":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/24794","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2722"}],"replies":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=24794"}],"version-history":[{"count":1,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/24794\/revisions"}],"predecessor-version":[{"id":24847,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/24794\/revisions\/24847"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/24795"}],"wp:attachment":[{"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=24794"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/categories?post=24794"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/me-en.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=24794"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}