Research, Technical

More Code = Wider Attack Surface: AI Coding Assistants Deliver Productivity at the Cost of More Endpoints and More OSS Sprawl 

Itay Nussbaum
Product Manager
Published February 3, 2026 · 7 min. read

The narrative often sold around AI adoption is one of efficiency. The actual data points to one of sprawl.

We are in the midst of a reckoning with the benefits and costs of AI coding assistants. 

Data clearly points to valuable boosts in productivity and skill: A study from Anthropic highlighted a 50% increase in engineer productivity using Claude, and a research paper from the University of Chicago pointed to a 39% increase in code merges post-Cursor adoption.

But the security risks of AI are becoming more apparent as usage increases. Our own Apiiro research reveals 10x more security findings in code shipped by AI-assisted teams.

The speed of AI code adoption makes cost/benefit analysis challenging: organizations and development teams are ramping up usage faster than they can measure security impact.

The question of AI-assisted code is no longer one of if, but how. Still, security leaders need trustworthy data to make an informed decision about how to promote risk-aware adoption. 

Apiiro’s latest research into Open Source Software (OSS) sprawl and API attack surface growth delivers an in-depth look at the impact of explosive code growth on software architecture.


Methodology

Apiiro, a leading Agentic Application Security platform, analyzed thousands of code repositories and developers at a Fortune 500 enterprise over a period of two and a half years to determine the impact of AI coding assistants on productivity and risk. The results point to a marked risk premium: AI coding assistants deliver real productivity gains, but expand the attack surface faster than security teams can manually review it.

1. A Quantitative Analysis of Open Source (OSS) and API Sprawl

The following research was conducted on a Fortune 500 Apiiro customer, analyzing how adoption of AI coding assistants (specifically GitHub Copilot) correlates with the expansion of the enterprise attack surface. Analysis of our API and OSS inventory data reveals a critical side effect: the organizational attack surface is expanding faster than security teams can manage it.

The research includes analysis of OSS dependency inventories, API endpoint catalogs, and developer behavior data across 26.5K repositories, 14K developers, 4.7M OSS package records, and nearly 80K APIs in code.


2. The Core Trend: Unmanaged Expansion

The productivity gains associated with AI do not lead to leaner environments. Our data points instead to API attack surface sprawl.

Finding A: 40% Increase in Total Entry Points

Between June and October 2025, API surface records grew from 54,627 to 79,630 (+46%), representing 53,598 unique API definitions in code across 2,033 repositories. Of these, 72.5% are internal controller actions, while 27.5% (14,743) are REST-style HTTP endpoints. This represents a more-than-40% increase in the attack surface in under six months.

Finding B: Open Source Software (OSS) Sprawl

This analysis of 4.7 million OSS package records across 9,352 repositories reveals a concerning trend: organizations are experiencing significant package sprawl, where multiple packages perform the same function and dramatically expand the attack surface.

Unchecked Package Generation Creates Functional Duplicates

Across major functional categories such as database, data serialization, logging, testing, cryptography, HTTP clients, and compression, organizations are using hundreds to thousands of different packages for the same purpose.

This leads to more attack vectors, more components to patch, a higher risk of vulnerable dependencies, and greater complexity during security audits.

The data indicates continuous growth in package usage, with the fastest acceleration in:

  • Data serialization libraries (JSON, YAML, XML, Protobuf)
  • Cryptography libraries (encryption, hashing, authentication)
  • File-management utilities (glob, path, filesystem tools)
  • HTTP client libraries (request, fetch, API clients)

These trends demonstrate a steadily expanding attack surface across the entire organization.

3. The AI Coding Assistant Factor

AI coding assistants prioritize the path of least resistance when fulfilling prompts. This leads to unchecked proliferation of packages and suggestions of unapproved or unmaintained dependencies.

How AI Assistants Contribute to Package Sprawl

  1. Quick suggestions introduce new dependencies without checking whether alternatives already exist internally.
  2. Inconsistent recommendations cause different developers to use different packages for the same job.
  3. Outdated training data may lead AI models to suggest unmaintained or vulnerable libraries.
  4. AI assistants lack organizational context and do not know approved package lists or security policies.

Real-World Impact

  • Developer A is suggested “axios” for HTTP.
  • Developer B is suggested “node-fetch.”
  • Developer C is suggested “got.”

The result is three different HTTP libraries, creating three attack vectors and three times the remediation work.
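This kind of duplication can be audited mechanically. Below is a minimal Python sketch, not part of the research tooling, that walks a repository tree, reads each `package.json`, and reports which HTTP client libraries each project declares. The client list is illustrative; a real inventory covers many more functional categories.

```python
import json
from collections import defaultdict
from pathlib import Path

# Illustrative set of npm packages that all serve the same function.
HTTP_CLIENTS = {"axios", "node-fetch", "got", "superagent", "request"}

def audit_http_clients(repo_root: str) -> dict[str, set[str]]:
    """Scan every package.json under repo_root and report which
    HTTP client libraries each project declares."""
    usage: dict[str, set[str]] = defaultdict(set)
    for manifest in Path(repo_root).rglob("package.json"):
        try:
            deps = json.loads(manifest.read_text()).get("dependencies", {})
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        for pkg in deps:
            if pkg in HTTP_CLIENTS:
                usage[pkg].add(str(manifest.parent))
    return dict(usage)
```

Any result with more than one key is a candidate for consolidation to a single organizational standard.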

Security Implications

  1. Higher Probability of Vulnerabilities
    1. More packages create more entry points.
    2. Each dependency brings its own subtree.
  2. Remediation Complexity
    1. More OSS means more code paths and higher risk of breaking functionality when applying fixes.
    2. Testing requirements increase, and rollback procedures become more complex.

4. AI Code Assistants Boost Velocity – at the Cost of More Dependencies and More Lines of Code

Finding C: Copilot Repos Show Higher OSS Sprawl

Our analysis reveals a direct correlation between Copilot usage and increased dependency counts:

Metric                          Copilot Repos    Non-Copilot Repos    Ratio
Average commits per repo        349              195                  1.79x
Average developers per repo     6.6              2.7                  2.46x
Average OSS packages per repo   832              397                  2.09x
OSS packages per commit         4.16             2.91                 1.43x

Key Finding: Repositories with Copilot activity have 43% more OSS packages per commit than repositories without Copilot usage.

Finding D: Velocity Gains Come With Risk Costs

Velocity Evidence

Metric                       Finding                           Security Implication
Commits per session          +18% higher with Copilot          More code = more attack surface
Top improvers                Up to 7x productivity gains       Faster code generation = faster sprawl
Developer improvement rate   44% showed gains after Copilot    Widespread acceleration of code output

Key Finding: Developers using Copilot logged 18% more commits per session, with top improvers demonstrating up to 7x productivity gains, leading to faster and more prolific expansion of the attack surface.


5. Conclusion: The Copilot Risk Premium

AI coding assistants deliver real productivity gains: 44% of developers showed measurable improvement, with top performers achieving up to 7x productivity multipliers.

However, this velocity comes at a cost. 

Risk Indicator          Copilot Repos    Non-Copilot Repos    Premium
OSS per commit          4.16             2.91                 +43%
High-OSS likelihood     32.5%            16.1%                +102%
Avg packages per repo   832              397                  +110%

Copilot-active repositories show:

  • 43% more OSS packages per commit
  • 2x higher likelihood of having 100+ dependencies
  • Positive correlation (r=0.36) between adoption and OSS sprawl

In summary, between June 2023 and November 2025:

  • OSS packages grew 44% (3.3M → 4.7M)
  • API endpoints grew 44% (55K → 79K)
  • Copilot seats grew 46% (183 → 267)

How to Maximize AI-Assisted Coding Benefits – and Minimize Attack Surface Growth

1. Shift to “Inventory-First” Security (ASPM)

In an AI-driven environment, you cannot secure what you cannot see. The sheer speed of code generation requires Application Security Posture Management (ASPM) to provide a real-time map of your environment.

  • Continuous Discovery: Move away from point-in-time scans. Implement tools that provide a live, “system-of-record” inventory of every API endpoint and OSS package.
  • The “Golden Path” Inventory: Create an internal “service catalog” or “approved library list.” If an LLM suggests axios, but your organization standard is got, your tooling should flag this in the IDE before the code is even committed.
  • API Shadow Detection: Since AI creates endpoints 40% faster, you need automated detection for “Shadow APIs” – endpoints that exist in code but aren’t documented or routed through your standard WAF/Gateway.
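As a minimal sketch of what shadow-API detection boils down to, the Python below diffs the endpoints discovered in code against the routes registered in the gateway. Both inputs are hypothetical; in practice they would come from an ASPM inventory export and the gateway's route table.

```python
def normalize(route: str) -> tuple[str, str]:
    """Split 'GET /users/' into ('GET', '/users') so casing and
    trailing-slash differences don't mask a match."""
    method, path = route.split(None, 1)
    return method.upper(), "/" + path.strip().strip("/")

def find_shadow_apis(code_endpoints: set[str], gateway_routes: set[str]) -> set[str]:
    """Endpoints discovered in code but not routed through the gateway/WAF."""
    routed = {normalize(r) for r in gateway_routes}
    return {e for e in code_endpoints if normalize(e) not in routed}
```

Anything this returns exists in code but bypasses your standard WAF/Gateway controls, and should be documented, routed, or removed.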

2. Implement “Guardrails at the Prompt”

The best time to stop sprawl is before the developer hits “Merge.” This requires moving security feedback into the “Developer Moment.”

  • Context-Aware IDE Plugins: Use security plugins that “see” what the AI assistant is suggesting. If an LLM suggests a library with a high blast radius or a known vulnerability, the IDE should immediately offer the “Approved” alternative.
  • Justification-Based PR Gates: Configure your CI/CD to detect net-new dependencies. If a PR adds a third HTTP client to a repo that already has two, trigger a mandatory “Justification” field for the developer. This adds a “speed bump” that discourages mindless LLM copy-pasting.
  • Automated Reachability Analysis: Don’t just scan for CVEs. With the volume of OSS packages doubling, use reachability analysis to prioritize fixing only those vulnerabilities that are actually reachable by your application’s execution path.
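A justification-based PR gate of the kind described above can be sketched in a few lines of Python. This is a hypothetical CI check, not an Apiiro feature: it diffs the dependency manifest between the base branch and the PR head, and flags net-new packages that duplicate a function the repo already covers (here, HTTP clients; the list is illustrative).

```python
# Illustrative set of npm packages that all serve as HTTP clients.
HTTP_CLIENTS = {"axios", "node-fetch", "got", "superagent", "request"}

def net_new_dependencies(base: dict, head: dict) -> set[str]:
    """Packages declared on the PR head but not on the base branch."""
    return set(head.get("dependencies", {})) - set(base.get("dependencies", {}))

def needs_justification(base: dict, head: dict) -> set[str]:
    """Net-new HTTP clients added to a repo that already has one.
    These should trigger a mandatory justification field on the PR."""
    existing = HTTP_CLIENTS & set(base.get("dependencies", {}))
    return {p for p in net_new_dependencies(base, head)
            if p in HTTP_CLIENTS and existing}
```

Wired into CI, a non-empty result blocks the merge until the developer explains why the existing client is insufficient, which is exactly the "speed bump" that discourages mindless copy-pasting of LLM suggestions.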

3. Move from “Security Triage” to “Automated Remediation”

When AI generates code at 7x speed, security teams cannot manually triage results. You must fight AI with AI.

  • AI Security Assistants (ACSA): Deploy “Virtual Security Champions” that automatically generate fix PRs for the sprawl they detect. If the LLM introduced a redundant library, the ACSA should propose a refactor to the organizational standard.
  • Policy-as-Code (Rego/Sentinel): Hard-code your security policies (e.g., “No new serialization libraries,” “All REST endpoints must have Auth decorators”). Enforce these at the pre-commit or build stage so that non-compliant AI-generated code never reaches a reviewer.
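In production this kind of policy would typically live in Rego or Sentinel, as noted above; as a simplified analogue, here is a Python pre-commit check for one hard-coded policy, "no new serialization libraries." The package names and approved standard are purely illustrative.

```python
# Illustrative category and organizational standard -- not a real policy.
SERIALIZATION_LIBS = {"pyyaml", "ujson", "orjson", "msgpack", "simplejson"}
APPROVED = {"orjson"}  # hypothetical approved standard

def check_policy(new_packages: set[str]) -> list[str]:
    """Return one violation message per new, non-approved serialization
    library; an empty list means the commit passes the policy."""
    return [
        f"policy violation: '{pkg}' is a serialization library; "
        f"use one of {sorted(APPROVED)}"
        for pkg in sorted(new_packages & (SERIALIZATION_LIBS - APPROVED))
    ]
```

Run at the pre-commit or build stage, a non-empty result fails fast, so non-compliant AI-generated code never reaches a human reviewer.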

4. Address the “Human-in-the-Loop” Problem

The research shows that developers using AI often feel less responsible for the code they “co-author.”

  • “Vibe Coding” Accountability: Implement a policy where developers must explicitly “sign off” on AI-generated blocks. This reinforces that the human is the final authority and liable for the code’s security.
  • Specific Prompting Training: Train developers to include security constraints in their prompts. Instead of asking for a “file upload handler,” they should ask for a “file upload handler with input validation, 5MB limit, and malware scanning.”

We cannot put the AI genie back in the bottle, nor, as the data shows, should we want to: the enormous productivity boosts enabled by AI-assisted development are here to stay.

But so are the risks, unless security leaders take steps to mitigate them.

The future of securing AI code is all about context, resilience, and accountability. Only by giving security teams the safeguards and training they need to deploy AI-assisted code responsibly can we maximize the benefits while minimizing attack surface growth and preventing risk.


Trusted by hundreds of global enterprises – including Shell, USAA, and BlackRock – and recognized by Gartner, IDC, and F&S as an ASPM leader, Apiiro is the Agentic Application Security platform built for the AI era.

Schedule a demo to see our continuous, risk-aware approach to securing AI-assisted code.