Artificial intelligence (AI) has rapidly evolved from a science fiction plot device into something that touches our lives every day. Its nearly endless applications bring comparable measures of risk and reward, along with plenty of public debate. What is clear, however, is that AI in healthcare is already changing lives. So even though universal acceptance and adoption of AI in healthcare may seem out of reach, it is essential that we get there, and that starts with regulating the sector.
I was therefore pleased to see that, after years of debate, the European Union (EU) has finally agreed on the proposed Artificial Intelligence Act (AI Act). At the heart of this regulation is the recognition of the fundamental rights of European citizens.
A few years ago, Europe successfully passed the General Data Protection Regulation (GDPR), marking a global milestone in the protection of personal data. GDPR has proven to be a timely and necessary action to protect individuals’ privacy in the rapidly evolving online world. This legislation established a precedent of trust, allowing individuals to share their personal data securely.
The new AI rules signal lawmakers’ continued commitment to protecting fundamental rights, such as nondiscrimination, equal access, and freedom of expression, that new technologies can put at risk. In practice, this means the responsibility falls to us, the leaders of AI healthcare companies, to embrace and work within this regulation so that precision medicine can keep advancing around the world. What’s next?
Building trust depends on ensuring data security
It’s almost impossible to trust something you don’t understand, and for most of us, AI is still very new. One feature of the AI Act and similar laws that should increase public confidence is the new transparency requirements around how systems work, including explaining how an AI system reaches its decisions and what data was used to train it.
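What might such disclosure look like in practice? The sketch below is purely illustrative: a hypothetical, model-card-style record whose names and fields are my own, not the Act’s official documentation template, capturing the kind of facts a deployer could publish about how a system was trained and where it should and should not be used.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransparencyRecord:
    """Hypothetical summary of the facts a deployer might disclose about a model."""
    model_name: str
    intended_use: str
    training_data_sources: List[str]   # provenance of the training data
    evaluation_populations: List[str]  # who the model was validated on
    known_limitations: List[str] = field(default_factory=list)

record = TransparencyRecord(
    model_name="variant-triage-model",
    intended_use="Rank genomic findings for clinician review; not a diagnosis.",
    training_data_sources=["consented, pseudonymized clinical cohorts"],
    evaluation_populations=["adult oncology patients across multiple sites"],
    known_limitations=["lower recall for under-represented ancestries"],
)
print(record.model_name, record.training_data_sources)
```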
Having analyzed the genomic profiles of more than 1.5 million patients worldwide since 2015, we are acutely aware of the challenges of keeping this data secure. Our company protects data through a rigorous combination of data architecture, pseudonymization, anonymization, minimization, and segregation. In light of these new regulations, healthcare companies must continue to strengthen their protection frameworks to ensure the data security of millions of patients worldwide.
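To make one of those techniques concrete, here is a minimal sketch of pseudonymization using a keyed hash. It is an illustration under stated assumptions, not a description of our actual pipeline: the key name, environment variable, and record fields are hypothetical, and in production the key would live in a managed secret store, separate from the data.

```python
import hashlib
import hmac
import os

# Hypothetical key; in practice it would sit in a managed secret store,
# never alongside the pseudonymized records themselves.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "example-only-secret").encode()

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a stable keyed pseudonym (HMAC-SHA256).

    Whoever holds the key can re-link pseudonyms to the original identifiers,
    which is why GDPR still treats pseudonymized data as personal data.
    """
    return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

# Illustrative record: the direct identifier never leaves this function call.
record = {"patient": pseudonymize("MRN-00042"), "finding": "BRCA2 pathogenic variant"}
print(record)
```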
In addition to new transparency requirements, we expect AI healthcare companies will be required to maintain risk mitigation systems, keep better documentation, and provide human oversight. We can help build public trust by complying with existing regulations, adopting the new rules, and demonstrating a strong record on data security.
Accelerate progress with a deeper understanding of providers
AI’s ability to transform a person’s entire genome into actionable insights that inform diagnosis and treatment planning will one day be standard practice. Artificial intelligence is only beginning to revolutionize the healthcare industry, because it can synthesize and analyze data at a scale no clinician can, and it is the catalyst that turns this abundance of data into invaluable insights.
Earning the trust of doctors and researchers is crucial to gaining access to this valuable data. Companies need to explain how they apply algorithms to massive data sets to gain deep insight into how patients respond to diseases and treatments. AI is not designed to replace healthcare providers. Rather, its role is to complement providers’ training and judgment, making it faster and easier for them to make data-driven decisions for their patients.
Where do we go from here?
It remains unclear how the EU will implement this new regulation, and many of its provisions will not come into force for almost two years. Global collaboration is crucial to using artificial intelligence safely for innovation in healthcare. Companies pioneering this technology need to work together and in harmony with regulators. While regulators will ultimately help unite all parties, companies already using AI must maintain an open dialogue with one another to facilitate the safe adoption of AI in healthcare and move data-driven medicine forward.