Reading Notes on the EU's "Draft Code of Practice on Transparency of AI-Generated Content"
Document Context and Reading Background
Document Title: “First Draft Code of Practice on Transparency of AI-Generated Content”[1]
Date: December 2025
Overview: The document serves as one of the implementing instruments accompanying Article 50 of the EU’s Artificial Intelligence Act (AI Act). This first draft of the Code of Practice on Transparency of AI-Generated Content was drafted jointly by two working groups, each addressing a different set of regulated actors:
First, Working Group 1 (WG1) focuses on technical marking and detection of AI-generated content, corresponding to Articles 50(2) and 50(5).
Second, Working Group 2 (WG2) focuses on disclosure, labeling, and perceptible transparency for natural persons, corresponding to Articles 50(4) and 50(5).
This document does not have direct legislative force. Its primary function is to translate the highly abstract transparency obligations of the AI Act into operable technical and organizational measures, and to give market surveillance authorities a practical reference for assessing compliance. Adherence to the Code does not automatically establish compliance, but it carries significant weight as evidence in a compliance demonstration.
1. Regulatory Objectives and Basic Assumptions
1.1 Overall Objectives
The Code of Practice continues the legislative purpose of Article 1 of the AI Act. Its objectives can be summarized as follows:
First, maintaining the trustworthiness and integrity of the information ecosystem.
Second, reducing systemic risks arising from AI-generated and manipulated content under conditions of large-scale dissemination.
Third, promoting artificial intelligence innovation while safeguarding democratic order, rule of law principles, and fundamental rights.
1.2 Implicit Regulatory Assumptions
The document contains several key premises:
First, as generative AI capabilities improve, natural persons will find it increasingly difficult to intuitively distinguish AI-generated content from human-created content.
Second, the transparency issue cannot be understood as merely a technical problem, but rather as an institutional problem jointly constituted by technology, law, and society.
Third, relying solely on the public’s individual ability to identify such content is insufficient to address the risks posed by the diffusion of deepfakes and manipulative content.
Finally, transparency needs to be realized through infrastructure-level mechanisms rather than depending on the self-discipline of individual actors.
2. Responsibility Distribution Structure: Distinction Between Providers and Deployers
The Code of Practice continues the AI Act’s responsibility chain design, assigning different types of obligations to different actors.
AI system providers bear technology-centered structural obligations, focusing on making AI-generated or manipulated content detectable, verifiable, and traceable.
AI system deployers bear communication-oriented contextual obligations, focusing on ensuring that natural persons can clearly perceive, when they encounter the content, that it has been artificially generated or manipulated.
This division reflects a vertical responsibility logic: the technical source is responsible for making content identifiable, while the dissemination and usage end is responsible for making it perceivable to humans.
3. Section 1: The Technical-Regulatory Logic of Provider Obligations
3.1 Multi-Layer Marking Principle
The Code of Practice explicitly rejects the notion that any single technical means can satisfy all legal requirements, instead requiring multi-layered, redundant, and mutually complementary marking schemes. The rationale is threefold: different content modalities face different technical constraints; there is an irresolvable tension among robustness, interoperability, cost, and content quality; and marking and circumvention are locked in an ongoing adversarial dynamic.
Therefore, providers should combine multiple marking mechanisms within the range of technical feasibility.
3.2 Three Core Marking Pathways
First, metadata and digital signature-based marking. This approach embeds information about the generation source, the type of operation, and the system identity into content files, using digital signatures to ensure integrity. It is suitable for structured content such as images, videos, and documents, but such metadata is easily lost during platform transcoding, screenshotting, or format conversion. (A minimal code sketch of this pathway follows the three pathways below.)
Second, content-embedded watermarking technology. This approach embeds markers directly into the content body in an imperceptible manner, making them difficult to remove without degrading content quality. Watermarks can be implemented at the model training stage, inference stage, or output stage, reflecting the idea of internalizing transparency mechanisms within content structure.
Third, fingerprinting, logging, and forensic detection mechanisms. This pathway serves as a fallback when active marking fails or is removed, and is particularly relevant for text content. Forensic detection does not rely on marking itself but makes judgments based on model characteristics, statistical signals, or training-data traces, reflecting that the transparency regime no longer rests entirely on providers’ voluntary marking.
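To make the first pathway more concrete, the sketch below shows one way a provider might attach a signed metadata record to a generated asset. It is a minimal illustration: the field names (system_id, operation, content_sha256) and the use of an Ed25519 key are assumptions made for this note, not the draft Code’s schema; real deployments would more likely follow an existing provenance standard such as C2PA.

```python
# Illustrative sketch only: binding a signed metadata record to a generated asset.
# Field names and key choice are assumptions for this note, not the draft's schema.
import hashlib
import json
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_generation_record(provider_key: Ed25519PrivateKey,
                           system_id: str,
                           operation: str,
                           content: bytes) -> dict:
    """Build a metadata record bound to the content's hash and sign it."""
    record = {
        "system_id": system_id,   # which AI system produced or modified the content
        "operation": operation,   # e.g. "fully_generated" or "ai_modified"
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": provider_key.sign(payload).hex()}

# Example usage with a placeholder byte string standing in for an image file:
provider_key = Ed25519PrivateKey.generate()
signed = sign_generation_record(provider_key, "example-image-model-v1",
                                "fully_generated", b"<generated image bytes>")
```

Because the record is bound to a hash of the exact file bytes, any transcoding, screenshotting, or re-encoding breaks that binding, which is precisely the fragility noted for this pathway above.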
3.3 Introduction of Content Provenance Chain
The Code of Practice proposes the concept of recording content provenance chains, i.e., structured recording of every step from human creation to AI generation or modification. This mechanism aims to distinguish fully AI-generated content from partially AI-modified content and provide a technical foundation for subsequent disclosure obligations and responsibility attribution.
The introduction of provenance chains marks a shift in transparency from single-point marking to process recording, representing an important institutional innovation in this document.
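As a reading aid, the sketch below models a provenance chain as an append-only, hash-linked log. The step names and fields are illustrative assumptions rather than the draft’s schema; the point is only that each record commits to the hash of its predecessor, so tampering with any earlier step becomes detectable.

```python
# Minimal sketch of a provenance chain as a hash-linked log (illustrative only).
import hashlib
import json
from dataclasses import dataclass

@dataclass
class ProvenanceStep:
    actor: str         # e.g. "human_author", "image-model-v1", "editing-tool"
    action: str        # e.g. "created", "ai_generated", "ai_modified", "cropped"
    content_hash: str  # hash of the content after this step
    prev_hash: str     # hash of the previous step's record, linking the chain

def step_hash(step: ProvenanceStep) -> str:
    payload = json.dumps(step.__dict__, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_step(chain: list, actor: str, action: str, content_hash: str) -> None:
    prev = step_hash(chain[-1]) if chain else "genesis"
    chain.append(ProvenanceStep(actor, action, content_hash, prev))

def chain_is_intact(chain: list) -> bool:
    """Each record must reference the hash of its predecessor."""
    return all(chain[i].prev_hash == step_hash(chain[i - 1])
               for i in range(1, len(chain)))

# Example: human draft, then AI modification.
chain: list = []
append_step(chain, "human_author", "created", "hash_of_original_draft")
append_step(chain, "image-model-v1", "ai_modified", "hash_of_edited_image")
print(chain_is_intact(chain))  # True while no record has been altered
```

A structure of this kind is what allows fully AI-generated content to be distinguished from partially AI-modified content, since the chain records which actor performed which operation at each step.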
4. Detection and Verifiability Mechanisms
4.1 Institutionalization of Detectability
Providers must not only mark content but also ensure that the marking is detectable in practice. In concrete terms, this means providing free detection interfaces or tools, allowing users, platforms, and regulatory authorities to verify content provenance, and preserving detection capabilities for regulatory use even when a company exits the market.
This design effectively treats detection capability as a technical infrastructure with public attributes.
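A minimal sketch of what such a free verification tool might expose is shown below. It assumes the hypothetical signed-record layout from the earlier marking sketch and an Ed25519 provider key; neither is mandated by the draft.

```python
# Illustrative verification helper matching the hypothetical signed record above.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_marked_content(content: bytes, signed: dict,
                          provider_public_key: Ed25519PublicKey) -> bool:
    """Return True only if the content matches its record and the signature is valid."""
    record = signed["record"]
    if hashlib.sha256(content).hexdigest() != record["content_sha256"]:
        return False  # content was altered after marking, or the record was detached
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        provider_public_key.verify(bytes.fromhex(signed["signature"]), payload)
        return True
    except InvalidSignature:
        return False
```

Exposing such a check as a public, free-of-charge interface is what turns detectability from a private capability of the provider into infrastructure that platforms and market surveillance authorities can rely on.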
4.2 Interoperability and Shared Verification Mechanisms
The Code of Practice encourages collaboration among different providers to promote the formation of shared validators and open standards, avoiding fragmentation or platform lock-in of transparency mechanisms. Interoperability requires that marking and detection technologies function across platforms and environments without depending on a single ecosystem.
5. Section 2: Deployer Obligations and Perceptible Transparency
5.1 Scope of Disclosure Obligations
Deployers must disclose two categories of content: first, image, audio, or video content constituting a deepfake; second, AI-generated or manipulated text published to inform the public on matters of public interest without having undergone human review. Legally authorized law-enforcement uses, and text that has undergone human review under editorial responsibility, may be exempted.
5.2 Two-Tier Content Classification System
The Code of Practice introduces a two-tier classification, distinguishing relevant content into fully AI-generated content and AI-assisted content. The classification is not based on technical detail but is oriented toward public understanding, intended to convey the degree of AI involvement and the potential for deception.
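Expressed as a data model, the two tiers might map onto user-facing disclosure labels roughly as sketched below; the label wording is a placeholder for illustration, not the EU’s icon text.

```python
# Hypothetical mapping from the two-tier classification to disclosure labels.
from enum import Enum

class AIContentClass(Enum):
    FULLY_AI_GENERATED = "fully_ai_generated"
    AI_ASSISTED = "ai_assisted"

DISCLOSURE_LABEL = {
    AIContentClass.FULLY_AI_GENERATED: "This content was generated by AI.",
    AIContentClass.AI_ASSISTED: "This content was created with the assistance of AI.",
}

def disclosure_for(classification: AIContentClass) -> str:
    return DISCLOSURE_LABEL[classification]

print(disclosure_for(AIContentClass.AI_ASSISTED))
```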
5.3 Icons as Regulatory Interface
Unified icons are designed as the core carrier of disclosure obligations, functioning not only to identify content attributes but also to serve as an interface between technical marking and public understanding. The long-term goal is to develop interactive EU-unified icons that allow natural persons to further understand which specific parts have been AI-generated or manipulated, while meeting accessibility requirements.
6. Accessibility and AI Literacy Requirements
The Code of Practice explicitly incorporates accessibility and AI literacy into the transparency obligation system. This includes providing alternative disclosure methods for users with different sensory capabilities, and improving public understanding of AI-generated content through documentation and training. Transparency is thus framed as social capacity building rather than a one-off act of information disclosure.
7. Preliminary Analytical Assessment
From a regulatory design perspective, this Code of Practice adopts a technically realist stance, acknowledging the tension between existing technical capabilities and institutional objectives. Transparency is conceived as a continuously evolving infrastructure rather than a one-time compliance task. Regulation of AI-generated text is comparatively cautious, preserving institutional space for human editing and the assumption of editorial responsibility. Meanwhile, numerous technical details remain open, leaving room for subsequent standardization and academic research.
8. Future Research Questions
The Code raises several questions worthy of further research, including whether marking and detection mechanisms will catalyze new adversarial technological races, how content provenance chains maintain integrity in cross-platform dissemination, and whether editorial responsibility exemptions might be strategically exploited. Additionally, whether AI transparency is evolving into a new soft regulatory infrastructure deserves continued attention.
References
[1] European Commission. (2025, December 17). First draft code of practice on transparency of AI-generated content. https://digital-strategy.ec.europa.eu/en/library/first-draft-code-practice-transparency-ai-generated-content
Column: Law
Author: Wanhong Huang
Education: Master of Information Science and Technology, University of Tokyo
Published on 2026-01-08 22:41 · Beijing
Tags: Law, Artificial Intelligence, International Politics