While tools such as GitHub Copilot, Amazon CodeWhisperer, and similar AI coding assistants have increased developer productivity, they have simultaneously introduced critical challenges around code transparency, licence compliance, and intellectual property risk. Most development teams struggle to answer fundamental questions: Does our AI-generated code contain undeclared open source? What licence obligations have we inadvertently assumed? Are we exposed to copyright infringement claims? How do we maintain compliance without slowing development?

The Challenge of AI-Generated Components

AI models are trained on vast repositories of public code, which means they frequently produce code that closely resembles — or directly replicates — existing open source implementations. These similarities are often invisible to developers who assume the generated code is original. The problem is structural: AI models learn patterns from their training data and reproduce those patterns when prompted for similar functionality.

Research demonstrates the scope of this challenge. Studies analysing LLM-generated code found that between 0.8% and 5.3% exhibits notable similarity to existing open source implementations — similarity considered sufficient to indicate copying rather than independent creation when evaluated against stringent thresholds. Under more permissive thresholds, approximately 30% of AI-generated code shows at least some degree of overlap with open source codebases. These are not edge cases; they represent the routine output of mainstream AI coding tools.

The implications are significant:
  • Hidden from dependency scanners: AI-generated snippets do not appear in dependency manifests, bypassing traditional Software Composition Analysis (SCA) tools that only scan declared dependencies.
  • Distributed across codebases: Rather than being concentrated in a few files, AI-generated code fragments appear throughout projects, embedded within functions that developers have otherwise written themselves.
  • Licence obligations without awareness: When an AI model generates code that matches GPL-licensed source code, developers inherit those licence obligations — even though they never consciously copied anything.
  • Undetectable by conventional means: Traditional package-level scanners miss snippet-level similarities entirely, creating blind spots in compliance programmes.
Consider the following scenario: a developer uses GitHub Copilot to implement authentication middleware. The AI generates a function that closely matches an existing open source implementation licensed under GPL-2.0. The developer commits the code without realising it is not original. The organisation now has GPL obligations it is unaware of, embedded in proprietary software. The core problem is that you cannot address what you cannot detect.

Why AI Code Transparency Matters

Several converging factors have made AI code transparency an urgent concern:
  • Adoption at scale: GitHub reports that in files where Copilot is enabled, nearly 40% of the code is written by the AI tool, particularly in languages such as Python. AI coding assistants are now standard productivity tools for millions of developers, and the volume of AI-generated code entering production codebases has grown substantially as a result.
  • Licence compliance risk: Most AI models were trained on open source code governed by a range of licences, including MIT, Apache, GPL, and BSD. Where AI output resembles that training data, the original licences may apply. Organisations may therefore be incurring licence obligations without knowing which open source components are present in their codebase.
  • Copyright and IP exposure: Beyond licensing, verbatim or near-verbatim reproduction of copyrighted code creates copyright infringement risk. Research by the Software Transparency Foundation found that some AI outputs maintain substantial similarity to source material even under stringent similarity thresholds, indicating potential copyright concerns that traditional due diligence processes do not address.
  • Regulatory requirements: Emerging regulations, including the EU AI Act and enhanced cybersecurity frameworks, increasingly require transparency into software composition. Organisations must be able to demonstrate an understanding of the code they ship, including AI-generated components and their provenance.
  • Supply chain accountability: Customers, auditors, and business partners increasingly require Software Bills of Materials (SBOMs) that accurately reflect all components, including snippet-level open source. AI-generated code creates SBOM gaps that undermine supply chain transparency.
  • M&A due diligence: Acquisitions and investments require comprehensive IP assessments. Undisclosed AI-generated code containing open source creates undisclosed liabilities that can complicate transactions or trigger post-close disputes.

Building a Strategy for Transparency

Managing AI code risk requires visibility beyond traditional dependency scanning. Modern DevSecOps teams need snippet-level detection capable of identifying code similarities at the fragment level, not only at the level of complete packages or files. SCANOSS provides this capability through several complementary approaches:
  • Snippet-level detection: Identifies open source code fragments within AI-generated output, regardless of whether those fragments appear in dependency declarations. Detection operates at the function and code-block level, where AI-generated copying typically occurs.
  • Winnowing fingerprinting: Uses fingerprinting algorithms based on the Winnowing technique to detect structural code similarity even when surface-level changes — such as reformatting, variable renaming, or comment modifications — obscure the relationship to the source. Research validates Winnowing as an effective preliminary indicator for deeper similarity analysis.
  • Comprehensive knowledge base: Compares generated code against the SCANOSS Knowledge Base, which contains more than 27 terabytes of unique open source software indexed from more than 250 million URLs — approximately 35 times the scale of typical AI training datasets — providing broad detection coverage.
  • Licence intelligence: When matches are detected, surfaces the associated licence obligations, copyright information, and compliance requirements for the matched open source code.
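
To illustrate the Winnowing idea behind this kind of fingerprinting, the sketch below is a minimal, self-contained implementation of the published algorithm, not the SCANOSS Engine itself: hash overlapping k-grams of whitespace-normalised source, then keep the minimum hash (rightmost on ties) in each sliding window of w hashes. Any shared run of at least w + k - 1 characters is guaranteed to share a fingerprint, and pure formatting changes do not alter the result.

```python
import hashlib

def kgram_hashes(text: str, k: int) -> list:
    # Crude normalisation: lowercase and strip all whitespace, so
    # reformatting alone cannot change the fingerprint.
    s = "".join(text.lower().split())
    return [
        int.from_bytes(hashlib.md5(s[i:i + k].encode()).digest()[:4], "big")
        for i in range(len(s) - k + 1)
    ]

def winnow(text: str, k: int = 5, w: int = 4) -> set:
    """Return a winnowing fingerprint as a set of (hash, position) pairs.

    In each window of w consecutive k-gram hashes, keep the minimum hash,
    choosing the rightmost occurrence on ties, as in Schleimer et al.'s
    winnowing scheme.
    """
    hashes = kgram_hashes(text, k)
    if not hashes:
        return set()
    fingerprints = set()
    for start in range(max(len(hashes) - w + 1, 1)):
        window = hashes[start:start + w]
        low = min(window)
        pos = max(i for i, h in enumerate(window) if h == low)
        fingerprints.add((window[pos], start + pos))
    return fingerprints
```

Comparing the fingerprint hash sets of two files (for example with a Jaccard ratio) then gives a similarity score that survives reformatting, and partially survives variable renaming, because unchanged regions still contribute shared fingerprints.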

SCANOSS Tools for AI Code Detection

SCANOSS offers a suite of tools and integration points that bring transparency to AI-generated code:

SCANOSS Engine

The SCANOSS Engine implements Winnowing-based fingerprinting optimised for detecting code fragments. The Fast Winnowing optimisation provides up to 15× performance improvement over the baseline algorithm, enabling real-time scanning of large codebases without blocking development workflows.

SCANOSS-PY

The SCANOSS-PY command-line scanner performs snippet-level open source detection and integrates directly into local developer workflows and scripted environments.
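
In a scripted environment, a typical pattern is to run a scan that writes JSON results and then post-process them. The result layout assumed below — a mapping of scanned path to a list of matches, each carrying an "id" of "none", "file", or "snippet" and a "licenses" list — reflects scanoss-py's plain output format, but treat it as an assumption and verify it against your installed version.

```python
import json

def summarize_matches(results: dict) -> list:
    """Collect (path, match_type, licences) for every file or snippet match.

    Assumes scanoss-py's result layout: {path: [match, ...]} where each
    match has an "id" of "none", "file", or "snippet" and a "licenses"
    list of {"name": ...} entries (an assumption; check your version).
    """
    summary = []
    for path, matches in results.items():
        for m in matches:
            if m.get("id") in ("file", "snippet"):
                licences = sorted({lic["name"] for lic in m.get("licenses", [])})
                summary.append((path, m["id"], licences))
    return summary

# Hypothetical result excerpt, for illustration only
sample = json.loads("""
{
  "src/auth.c": [{"id": "snippet", "licenses": [{"name": "GPL-2.0-only"}]}],
  "src/main.c": [{"id": "none"}]
}
""")
print(summarize_matches(sample))
# [('src/auth.c', 'snippet', ['GPL-2.0-only'])]
```

The "none" entries record files that were scanned but produced no match, so filtering on the "id" field is what separates genuine findings from clean files.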

SBOM Workbench

SBOM Workbench provides a graphical interface for reviewing snippet-level scan results, managing licence obligations, and producing SBOM output from scanned codebases.

SCANOSS-CC

SCANOSS-CC enables side-by-side visual inspection of matched code, allowing developers to examine matches line by line and make granular compliance decisions.

Pre-Commit Hooks

Pre-commit hooks automatically scan code for open source similarities before commits reach the repository, providing detection at the earliest point in the development workflow.
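
As a minimal sketch of the logic such a hook applies (the scan-result layout — a mapping of path to match list with an "id" field of "none", "file", or "snippet" — is an assumption to verify against your scanner version):

```python
import subprocess

def staged_files() -> list:
    """Paths staged for the current commit (added, copied, or modified)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]

def should_block(results: dict) -> bool:
    """True if any scanned file carries a snippet-level match.

    Assumes the result layout {path: [match, ...]}, each match having
    an "id" of "none", "file", or "snippet".
    """
    return any(
        m.get("id") == "snippet"
        for matches in results.values()
        for m in matches
    )
```

Saved as `.git/hooks/pre-commit`, a wrapper script would pass `staged_files()` to a `scanoss-py scan` invocation, load the resulting JSON, and exit non-zero when `should_block` returns True, refusing the commit; the exact CLI flags vary by version, so consult `scanoss-py scan --help`.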

GitHub Actions

The SCANOSS Code Scan Action embeds snippet-level detection directly into continuous integration pipelines, enabling automated compliance checks on every pull request or push.

Next Steps

AI coding tools are now standard components of developer workflows. Organisations that do not implement AI code transparency face accumulating risk: licence obligations they cannot detect, IP exposure they cannot assess, and compliance gaps they cannot close. The sections below describe how to get started with SCANOSS, depending on your integration requirements.

Getting Started with AI Code Transparency

Not sure which tool is right for your workflow? Ask our AI assistant