Despite its vast potential, GenAI’s rapid development and deployment have outpaced the evolution of adequate legal and ethical frameworks. GenAI models can generate human-like content, including text, images, and code: as these tools are adopted in more contexts and integrated into our professional and personal lives, a commitment to their responsible use becomes ever more essential.

At CRIF, we are committed to developing a framework for the responsible use of AI that addresses both legal and ethical challenges, and is rooted in integrity, transparency, fairness, and a human-centric innovation ecosystem. 


A proposal by the European Commission to harmonize national rules

Determining liability for harm caused by GenAI systems is one of the most complex legal questions facing policymakers today. Both the European Union and the United States have proposed frameworks to address accountability, but gaps remain in assigning clear responsibility.

Traditional liability models that assign fault to a human actor do not readily apply to outputs generated autonomously by machines. If a chatbot provides harmful medical advice, or if an AI system generates defamatory or discriminatory content, it is unclear whether responsibility lies with the developer, the deployer, or the user.

The EU AI Act attempts to address this with obligations for risk assessment, human oversight, incident logging, and post-market monitoring. While these measures increase accountability, they fall short of defining a clear liability regime, especially for general-purpose models used in unpredictable ways.

Alongside the AI Act, the European Commission proposed an AI Liability Directive aimed at harmonizing national rules on non-contractual civil liability for harm caused by AI systems. The Directive sought to modernize the EU’s liability framework by introducing mechanisms that presume a link between harm and the use of high-risk AI systems when providers fail to meet their obligations, thereby easing the burden of proof for victims. However, the proposal faced resistance and was ultimately withdrawn from legislative consideration in early 2025.

The absence of binding EU-wide rules on AI liability leaves a critical gap in the legal framework, particularly in cases involving autonomous and opaque systems, where traditional fault-based liability models struggle to assign responsibility. As a result, individuals harmed by AI-generated outputs may continue to face significant procedural and evidentiary hurdles when seeking redress.


The current United States position on AI liability

The United States does not have a federal law governing AI liability. Instead, legislation encourages a sector-specific approach focused on promoting AI development, while security concerns are often addressed through voluntary industry commitments.

In the United States, Section 230 of the Communications Decency Act also grants online platforms and service providers immunity from liability for third-party content. Whether this protection extends to AI-generated content remains legally unclear, raising concerns that harmful outputs might go without remedy. Some legal experts have proposed a strict liability framework for high-risk applications, under which providers would be liable regardless of intent or negligence. Others argue for mandatory AI insurance or public registries for high-impact models.

Countries around the world are adopting different strategies to regulate artificial intelligence. The EU has taken a comprehensive legal approach, while others follow more decentralized or flexible models. These divergent strategies highlight the complexities of AI regulation.

To help bridge the accountability gap, companies should adopt internal AI governance systems, including clear usage policies, real-time monitoring of outputs, and mechanisms for redress when harm occurs, along the lines sketched below.
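By way of illustration only, here is a minimal Python sketch of what such a monitoring-and-redress hook might look like. The `generate` callable, the `REVIEW_TOPICS` list, and the simple keyword check are hypothetical placeholders, not a description of CRIF's actual tooling or of any production system.

```python
# Minimal sketch of an internal AI governance hook (hypothetical).
# Wraps a generic text-generation callable with a usage-policy check
# and an audit trail, so every output decision leaves a record.
import datetime
import json
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.audit")

# Hypothetical usage policy: topics the organization has decided
# require human review before an output can be released.
REVIEW_TOPICS = ("medical advice", "legal advice", "credit decision")


def governed_generate(generate, prompt: str) -> dict:
    """Run a GenAI call through policy checks and log the decision."""
    output = generate(prompt)
    flagged = [t for t in REVIEW_TOPICS if t in output.lower()]
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "flagged_topics": flagged,
        "released": not flagged,
    }
    # Logging every decision gives harmed parties, and internal
    # reviewers, a concrete trace to point to when seeking redress.
    audit_log.info(json.dumps(record))
    if flagged:
        return {"output": None, "status": "held_for_human_review", **record}
    return {"output": output, "status": "released", **record}
```

A real deployment would replace the keyword check with proper classifiers and route held outputs to a human reviewer; the point of the sketch is simply that every generation leaves an auditable trace that policies, monitoring, and redress can attach to.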