Regulation and Compliance: FDA, CE Marking, and the EU AI Act

Introduction

Clinical AI is not just a technological challenge; it is also a regulatory one. To ensure patient safety and public trust, AI-enabled medical tools must comply with evolving regulatory regimes: FDA oversight in the United States, CE marking under the EU Medical Device Regulation, and the EU AI Act. Understanding these frameworks is essential for clinicians, administrators, and innovators deploying AI in healthcare.

Regulation in the United States: The FDA

The U.S. Food and Drug Administration (FDA) regulates AI when it functions as Software as a Medical Device (SaMD), a category that covers most AI diagnostic tools. The FDA currently applies three main premarket pathways (roughly sketched in code after the list):

  • 510(k) clearance: for tools substantially equivalent to existing devices.
  • De Novo pathway: for novel tools without a predicate device.
  • PMA (Premarket Approval): for high-risk, class III devices.
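To make the distinction concrete, below is a minimal, purely illustrative Python sketch of how a vendor might triage a new tool toward one of these pathways. The device profile fields and decision rules are simplified assumptions for this post, not the FDA's actual decision logic.

    from dataclasses import dataclass

    @dataclass
    class DeviceProfile:
        """Simplified, hypothetical description of an AI-enabled medical device."""
        name: str
        risk_class: str      # "I", "II", or "III" (FDA device class)
        has_predicate: bool  # a legally marketed, substantially equivalent device exists

    def suggest_pathway(device: DeviceProfile) -> str:
        """Rough triage toward a premarket pathway (illustrative only)."""
        if device.risk_class == "III":
            return "PMA (Premarket Approval)"  # high-risk devices
        if device.has_predicate:
            return "510(k) clearance"          # substantial equivalence to a predicate
        return "De Novo pathway"               # novel tool with no predicate

    # Example: a novel Class II triage algorithm with no predicate device
    tool = DeviceProfile(name="ExampleTriageAI", risk_class="II", has_predicate=False)
    print(suggest_pathway(tool))  # -> De Novo pathway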

As of 2025, nearly 1,000 AI-enabled devices have been authorized by the FDA, the majority in radiology (e.g. lung nodule detection, mammography support).

FDA and Adaptive Algorithms

A challenge arises when AI models continue to learn or are retrained after deployment. The FDA's answer is the Predetermined Change Control Plan (PCCP), which lets vendors pre-specify, at submission time, which updates may occur after authorization and how they will be validated, without requiring a new submission for each change.
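As a thought experiment, a PCCP-style "change envelope" can be expressed as a pre-agreed set of bounds that any retrained model must satisfy before it is rolled out. The thresholds, metric names, and gate function below are hypothetical illustrations for this post, not language from FDA guidance.

    # Hypothetical pre-specified bounds for post-market model updates
    # (values are illustrative, not drawn from any real PCCP).
    CHANGE_ENVELOPE = {
        "min_sensitivity": 0.90,   # retrained model must stay at or above this
        "min_specificity": 0.85,
        "max_auc_drop": 0.02,      # allowed drop relative to the cleared baseline AUC
        "allowed_changes": {"retraining_on_new_data"},  # e.g. architecture changes excluded
    }

    def update_within_envelope(change_type: str, baseline_auc: float,
                               new_metrics: dict) -> bool:
        """Return True if a proposed update stays inside the pre-specified envelope."""
        if change_type not in CHANGE_ENVELOPE["allowed_changes"]:
            return False  # this kind of change would need a fresh regulatory submission
        return (
            new_metrics["sensitivity"] >= CHANGE_ENVELOPE["min_sensitivity"]
            and new_metrics["specificity"] >= CHANGE_ENVELOPE["min_specificity"]
            and baseline_auc - new_metrics["auc"] <= CHANGE_ENVELOPE["max_auc_drop"]
        )

    # Example: a retrained model evaluated against the envelope
    ok = update_within_envelope(
        change_type="retraining_on_new_data",
        baseline_auc=0.94,
        new_metrics={"sensitivity": 0.92, "specificity": 0.88, "auc": 0.93},
    )
    print(ok)  # -> True: this update could ship under the pre-agreed plan

In practice the real safeguard is the validation protocol behind each number, but even this toy gate shows the core idea: the acceptable change space is fixed before deployment, not negotiated afterwards.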

CE Marking and the EU Medical Device Regulation (MDR)

In Europe, AI diagnostic tools are regulated under the Medical Device Regulation (MDR). To be marketed, they must obtain a CE mark. Requirements include:

  • Clinical evaluation with evidence of safety and performance.
  • Post-market surveillance plans.
  • Technical documentation and risk management.

CE marking harmonizes market access across EU member states, but conformity assessment still relies heavily on notified bodies designated by individual member states.

The EU AI Act: A Global First

The EU AI Act, adopted in 2024, introduces the world’s first comprehensive AI-specific regulation. It categorizes AI systems by risk:

  • Unacceptable risk: prohibited (e.g. social scoring by governments).
  • High risk: includes most clinical AI systems, requiring strict compliance.
  • Limited or minimal risk: fewer requirements, often outside healthcare.

High-risk systems must demonstrate (see the checklist sketch after this list):

  • High-quality, representative training data.
  • Transparency and user information.
  • Human oversight mechanisms.
  • Post-market monitoring and reporting of incidents.
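One way a deploying organisation might track these obligations internally is a simple per-system compliance checklist. The structure below is a hypothetical internal tool for illustration, not a form defined by the AI Act.

    from dataclasses import dataclass

    @dataclass
    class HighRiskAIChecklist:
        """Hypothetical internal tracker for the AI Act's high-risk obligations."""
        system_name: str
        data_governance_documented: bool = False  # training data quality and representativeness
        transparency_info_provided: bool = False  # instructions for use and user information
        human_oversight_defined: bool = False     # named overseers and escalation procedures
        post_market_monitoring: bool = False      # monitoring plan and incident reporting

        def open_items(self) -> list[str]:
            """List obligations not yet satisfied for this system."""
            checks = {
                "data governance": self.data_governance_documented,
                "transparency and user information": self.transparency_info_provided,
                "human oversight": self.human_oversight_defined,
                "post-market monitoring and incident reporting": self.post_market_monitoring,
            }
            return [name for name, done in checks.items() if not done]

    # Example: a hypothetical sepsis-prediction system with oversight still undefined
    checklist = HighRiskAIChecklist(
        system_name="SepsisPredictor",
        data_governance_documented=True,
        transparency_info_provided=True,
        post_market_monitoring=True,
    )
    print(checklist.open_items())  # -> ['human oversight']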

Non-compliance carries substantial penalties: the most serious violations, such as deploying prohibited systems, can draw fines of up to €35 million or 7% of global annual turnover, whichever is higher, with lower caps for other breaches. That makes regulatory literacy a business-critical skill in healthcare.
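For a sense of scale, here is the "whichever is higher" arithmetic applied to a vendor with an invented turnover figure; both inputs are purely illustrative.

    def max_fine_eur(global_turnover_eur: float,
                     fixed_cap_eur: float = 35_000_000,
                     turnover_share: float = 0.07) -> float:
        """Upper bound of an AI Act fine for the most serious violations:
        the higher of a fixed cap or a share of global annual turnover."""
        return max(fixed_cap_eur, turnover_share * global_turnover_eur)

    # Hypothetical vendor with €1 billion in global annual turnover
    print(f"{max_fine_eur(1_000_000_000):,.0f}")  # -> 70,000,000 (7% exceeds the €35M cap)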

Implications for Hospitals and Clinicians

  • Hospitals must ensure the AI tools they deploy are appropriately authorized for their market (FDA-cleared or -approved in the US, CE-marked in the EU) and, where the AI Act applies, compliant with its requirements.
  • Clinicians must maintain AI literacy, as oversight and safe use are professional obligations under the Act.
  • Vendors must be transparent about validation, limitations, and monitoring plans.

Conclusion & Next Step

Regulation of clinical AI is complex and rapidly evolving. Clinicians do not need to become regulatory experts, but they must be aware of the frameworks that govern the tools they use. In the next post, we turn to the practical side of deployment: Implementing AI in Hospitals — Workflow, Change Management, and Pitfalls.