Artificial Intelligence, data protection and medical device regulations: squaring the circle with a historical perspective in Europe

When Europe started moving from an “industrial economy” to a so-called “knowledge economy” [6], raw data became the new coal, and in order to generate value there was a clear need to transform raw data into intellectual capital, innovation, information, education, training, research and development of new technologies. Therefore, again, it is not surprising that principles for data protection were defined in the European Union’s Data Protection Directive 95/46/EC in 1995 (two years after the Medical Device Directive), which was then replaced in 2016 (one year before the MDR) by the General Data Protection Regulation (GDPR), which came into effect in 2018.

5.1 Legal basis for the fair free movement of data in Europe: the GDPR

Regulation 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, commonly called the General Data Protection Regulation (GDPR), became applicable on 25 May 2018. The GDPR defines individuals’ fundamental rights in the digital age; the obligations of those processing data (i.e., collecting, storing, sharing, analyzing data, etc.); methods and costs for ensuring compliance; and sanctions for those in breach of the rules [7]. Even if the reference to AI is not explicit, the GDPR is relevant for AI because it includes rules for fair data management and explainable automated decision-making, and because it establishes the need for data protection impact assessments. However, the GDPR is not an AI regulation: it is not specific to AI and does not cover AI aspects, except for the processing of the personal data that AI uses. It is worth noticing that although the ambition of this regulation was to govern the free movement of data, to the extent that this is clearly mentioned in its title, there is a common impression that it is mainly intended to protect data and limit their exchange. This resulted in many challenges, creating the need for further regulations, specifically for the sharing of health data.

5.2 The infrastructure for sharing health data: EHDS

Now that AI is promising to revolutionize many sectors, including medicine, in parallel with the historical need for a shared market of coal and steel, Europe is in great need of sharing health data in order to maintain competitiveness and, in some areas, leadership in medical research and the MD manufacturing sector. The COVID-19 pandemic contributed to creating awareness in this regard. In 2020, a strong discussion emerged around the need for a European Health Data Space (EHDS), aimed at creating a unified and interoperable ecosystem for health data management and exchange to improve healthcare outcomes, foster medical research, and enhance healthcare delivery, facilitating the secure and standardized sharing of health data across EU member states (Footnote 1). A core element of the EHDS is the distinction between primary use (clinical care) and secondary use (research, policy-making). The proposal for primary use is based on building a voluntary infrastructure named MyHealth@EU, leaving national rules untouched while envisaging a new common legislative framework, which is expected to provide better access to and exchange of electronic health data, with the final goal of better health outcomes, better evidence and saved costs. For secondary use, it establishes common EU rules on permits and common safeguards, and unlocks the health sector’s data-economy potential through evidence-based policy-making and regulatory activities [8].

A further key element of the EHDS is the proposal of an “opt-in/opt-out” mechanism for using data in line with the GDPR. In this system, an “opt-in” approach is required for primary use: the patient can control and, therefore, explicitly consent or dissent to the use of their data. An opt-in system is often seen as more respectful of individual autonomy and privacy, but it may add complexity to the creation of large datasets, as it requires active consent registration from each individual. Conversely, for the secondary use of data an “opt-out” approach is envisaged, whereby the individual can withdraw consent for uses beyond the primary purpose of healthcare delivery, e.g., for research purposes. An opt-out system may lead to the creation of larger datasets, as it does not require active consent from each individual; however, it may raise concerns about privacy and about individuals’ awareness of how their data are used. The specific implementation of these mechanisms within the EHDS will need to carefully balance enabling robust research and innovation against safeguarding individual rights and privacy, a reflection of a more complex balance among citizens’ rights: the right to privacy vs. the right to more effective medicine as the result of impactful research [9].
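The asymmetry between the two consent regimes can be made concrete with a minimal sketch. All names and data structures below are illustrative assumptions for this paper, not part of any EHDS specification; the point is only that primary use defaults to "not allowed" until consent is registered, while secondary use defaults to "allowed" until consent is withdrawn.

```python
# Hypothetical sketch of the EHDS opt-in/opt-out logic described above.
# ConsentRecord and may_use are illustrative names, not an EHDS API.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    opted_in_primary: bool = False     # opt-in: explicit consent required
    opted_out_secondary: bool = False  # opt-out: allowed unless withdrawn

def may_use(record: ConsentRecord, purpose: str) -> bool:
    """Return True if the patient's data may be used for the given purpose."""
    if purpose == "primary":    # clinical care: active consent needed
        return record.opted_in_primary
    if purpose == "secondary":  # research/policy: permitted by default
        return not record.opted_out_secondary
    raise ValueError(f"unknown purpose: {purpose}")

# A patient who has registered no choice at all:
default_patient = ConsentRecord()
print(may_use(default_patient, "primary"))    # False: no explicit opt-in yet
print(may_use(default_patient, "secondary"))  # True: opt-out not exercised
```

The defaults encode the trade-off discussed above: the opt-in branch shrinks primary-use datasets unless every individual acts, while the opt-out branch grows secondary-use datasets unless individuals act to withdraw.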

The EHDS is a pillar of the European health strategy aimed at overcoming the existing fragmentation of rules by building an overarching infrastructure for better health data management, which can help in facing healthcare emergencies such as the COVID-19 pandemic. However, even the EHDS has its limitations, in particular regarding the protection of sensitive information across borders, interoperability and data standardization: varied systems and practices among healthcare providers may hinder the seamless exchange and use of health data, and differences in national legislation can impede the harmonization of health data governance.

5.3 Regulating AI for medicine: the AI Act

The above scenario is completed by the recent agreement of the EU Parliament and Council on comprehensive rules for trustworthy AI. The AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. It was proposed by the EU Commission in April 2021 and published for feedback, and in December 2023 a political agreement was reached between the EU Parliament and the Council. Based on this political agreement, the new regulation is expected to be voted on within 2024 and to become fully applicable after three years (i.e., by 2027), as for other EU regulations. Yet four years in this domain can be too long; therefore, readers may wish to familiarise themselves with the key elements of this regulation, which can already be explored through the available AI Act documents. The main new elements of the provisional agreement (Footnote 2) related to medical applications are three: the AI definition, the scope and perimeter of the EU AI regulation, and the adoption of a risk-based approach as in the MDR.

Regarding the AI definition, the AI Act will adopt a specific definition distinguishing AI from simpler software systems. The scope and perimeter of the AI Act have also been set by the political agreement reached in December 2023, which clarified that the regulation does not apply to areas outside the scope of EU law or to matters reserved exclusively to Member States, and that the AI Act will not apply to systems used for the sole purpose of research and innovation, or to people using AI for non-professional reasons.

Finally, as in the MDR, the AI Act will adopt a risk-based approach to AI. The AI Act clearly defines the AI applications that will not be permitted in the EU. These include AI applications for social scoring, for face recognition in public spaces, and for manipulating people. Then there are high-risk AI applications, such as AI applications for education, employment, justice and immigration, which are permitted pending a conformity assessment, in parallel with Class II and Class III medical devices. The path for the conformity assessment will have to be carefully defined, but an equivalent of a Notified Body will have to be involved, after an accreditation procedure. Moreover, the AI Act will regulate limited-risk AI applications, such as chatbots, deep fakes and emotion recognition systems that do not fall under the definitions of unacceptable or high risk. For those applications, transparency will be crucial, with manufacturers required to use clear labelling or to disclose that content has been manipulated [10]. This is in line with the GDPR, which already requires controllers processing personal data to be transparent about the use of profiling and automated decision-making. Finally, minimal-risk AI applications will include AI used in spam filters or video games; for those applications, manufacturers will be asked to adopt a code of conduct, avoiding any complicated Conformity Assessment Procedure (CAP).
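The four-tier structure just described can be summarised as a simple lookup from risk tier to obligation. The tiers and examples follow the paragraph above; the data structure and function names are a hypothetical simplification for illustration, not part of the AI Act text.

```python
# Illustrative mapping of the AI Act's risk tiers to the obligations
# sketched in the text above (a simplification, not the legal wording).
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "face recognition in public spaces",
                     "manipulating people"],
        "obligation": "prohibited in the EU",
    },
    "high": {
        "examples": ["education", "employment", "justice", "immigration"],
        "obligation": "conformity assessment via an accredited body "
                      "(analogous to a Notified Body under the MDR)",
    },
    "limited": {
        "examples": ["chatbots", "deep fakes", "emotion recognition"],
        "obligation": "transparency: clear labelling / disclosure "
                      "of manipulated content",
    },
    "minimal": {
        "examples": ["spam filters", "video games"],
        "obligation": "code of conduct, no Conformity Assessment Procedure",
    },
}

def obligation_for(tier: str) -> str:
    """Return the regulatory obligation attached to a risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("unacceptable"))  # prohibited in the EU
```

The parallel with the MDR is visible in the structure itself: as with medical device classes, the obligation is a function of the assigned risk tier, not of the individual product.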

5.4 The contribution of the global community of biomedical and clinical engineers

The European community of biomedical and clinical engineering has been proactive in contributing to the development of the legal framework depicted in this paper. Founded in 2003, the European Alliance for Medical and Biological Engineering and Science (EAMBES; https://eambes.org/), a not-for-profit NGO, federates 67 biomedical engineering scientific societies and research institutions from 31 European countries, representing the European ecosystem of clinical and biomedical engineering. EAMBES is a member of the International Federation for Medical and Biological Engineering (IFMBE; https://ifmbe.org/), which affiliates 6 international (IEEE EMBS, AAMI, ACCE, CAHTMA, CORAL and EAMBES) and 77 national scientific societies from 74 countries, representing the global ecosystem of clinical and biomedical engineering. In this role, EAMBES supports the European Parliament and Commission in shaping the regulatory framework impacting patient safety and medical devices. Founded in 1959 in the UNESCO building in Paris, the IFMBE is a not-for-profit NGO in official relations with the UN World Health Organization (WHO), working with several UN agencies (UNESCO, ILO, WHO). Since their foundation, EAMBES and the IFMBE have helped European and international policymakers shape the regulatory frameworks pertaining to medical devices and allied technologies. In particular, since 2015, EAMBES has continuously supported the European Parliament and the European Commission in developing relevant reports, such as the report on the economic and social impact of BME (published in the Official Journal of the European Union, Eur-Lex, 2015/C 291/07); regulations, such as the Medical Device Regulations (2017/745 and 2017/746), the EHDS and the AI Act; and the creation of the first European Parliament Interest Group on Biomedical Engineering (Footnote 3) [11].
While the BME community keeps its fingerprint and roots deep in research, teaching and innovation, the fast evolution of the medical device field has required those organizations to assume a growing, proactive role in supporting policymakers in shaping regulations with a significant impact on patient health and well-being.
