The EU AI Act is the world's first comprehensive regulatory framework for artificial intelligence. It establishes rules not only for generative AI models but also for the broader spectrum of AI-powered applications. Wiggin has been advising media, technology and entertainment companies on navigating this new regulatory reality.
The AI Act is a comprehensive market entry regulation
Compliance is mandatory for any party that places AI tools on the market or deploys them in the EU – irrespective of where the AI model was created. In this regard, the Act has ‘extraterritorial effect’.
Although the AI Act has already begun to apply this year, its remaining categories of provisions will become applicable in stages through to 2027. Now is the time to refine compliance strategies.
The regulation combines risk-based rules for prohibited and high-risk AI applications with general rules for general-purpose AI (GPAI) models (also known as generative or genAI models) and for AI applications.
For media and entertainment companies, the new framework on generative AI models and applications strikes at the core of their commercial and creative interests.
The AI Act is structured around a two-tiered regulatory approach
GPAI models and AI applications are governed separately. GPAI models, such as large language models, are foundational systems trained on vast datasets (often containing material protected by intellectual property rights) and capable of supporting a wide range of downstream uses. AI applications (or systems) are the specific tools and services built on top of those models, such as chatbots, content-recommendation engines or dubbing applications. Under the European Commission’s guidelines, the regulatory framework on generative AI is expected to apply primarily, if not exclusively, to the well-known large GPAI models. These models will be subject to obligations concerning transparency, documentation, copyright and more. Rightsholders and other stakeholders with legitimate interests may gain insight into the training data and operational characteristics of GPAI models.
AI application providers and deployers will mainly need to comply with transparency rules requiring safeguards around deepfakes and other synthetic content, and disclosure when AI chatbots interact with consumers.