In the healthcare sector, the adoption of AI – artificial intelligence – has been slow compared with other fields, but it is steadily increasing, and its rise is set to reshape the health sector as we know it. Solutions such as synthetic data and explainable artificial intelligence, which can accelerate real AI-based products in the health sector, were presented at the AI conference in the Science Park in Odense.
On one hand, AI holds enormous potential; on the other, it raises many ethical questions. How do we apply artificial intelligence in practice without violating ethical principles, the GDPR, and the protection of sensitive personal data? This dilemma was raised by several Danish and international speakers when Health Innovation Centre of Southern Denmark and Welfare Tech, in collaboration with the Innovation Network Danish Healthtech, organized the AI conference in Odense.
Mette Maria Skjøth, Senior Project Manager at CIMT, Odense University Hospital, spoke about the constant focus on ethics and law. She presented a new Center for Clinical AI, CAI-X, owned by Odense University Hospital and the University of Southern Denmark. At this center, AI development projects will take place in close collaboration with lawyers in Region Syddanmark.
Replace personal data with synthetic data
But perhaps the challenge of sensitive personal data is not that big after all? Hanan Drobiner, EU Sales Director at the company MDClone, offered the conference a possible solution.
-Synthetic data and high-quality data are key to accelerating innovation in digital health, Hanan Drobiner said.
Instead of spending time and effort obtaining permissions and approvals to use human data, MDClone works with a model that, based on a pool of genuinely collected data, creates a pool of synthetic data. This pool has the same statistical characteristics as the collected data but can no longer be traced back to individual persons.
-The synthetic data has exactly the same context, distribution and variance as the human data. Overall, the pools are the same – the difference is that the synthetic data is created by us, and therefore we can freely use it to develop machine learning for the benefit of employees and patients, Hanan Drobiner said at the AI conference in Odense.
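The idea can be illustrated with a minimal sketch (this is not MDClone's actual method, and the "patient" records are invented): generate synthetic rows that preserve each column's mean and spread, so aggregate statistics match the real data while no row corresponds to a real person. The sketch assumes numeric, independent, roughly normal features; a production system would model joint distributions and add formal privacy guarantees.

```python
# Hypothetical illustration of synthetic-data generation: sample new
# records from per-column distributions fitted to the real data.
import random
import statistics

# Toy "patient" records: (age, systolic blood pressure) -- invented values.
real = [
    (54, 128), (61, 135), (47, 121), (70, 142), (58, 130),
]

def synthesize(rows, n, seed=0):
    """Draw n synthetic rows matching each column's mean and std dev."""
    rng = random.Random(seed)
    columns = list(zip(*rows))
    params = [(statistics.mean(c), statistics.stdev(c)) for c in columns]
    return [tuple(rng.gauss(m, s) for m, s in params) for _ in range(n)]

synthetic = synthesize(real, 1000)
ages = [row[0] for row in synthetic]
print(round(statistics.mean(ages), 1))  # close to the real mean age of 58
```

The synthetic pool mirrors the real one statistically, but every record is machine-generated, which is the property that removes the link to identifiable individuals.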
Explainable AI is accelerating development
Jacob Høy Berthelsen, CEO of Enversion, believes that the development of AI is going too slowly.
-I know it is a huge job and a complicated process to develop this whole new world. But we need to have some working products on the market to raise more capital, he said at the AI conference.
He suggested XAI – eXplainable AI – as an aid. XAI refers to methods and techniques where the computer can describe how it arrived at a given result. XAI contrasts with the "black box" concept, where even the designers cannot explain how the computer reached a given result.
-With XAI we can always go back and double-check a result, just as we can adjust the design of the model's training. That should make the construction of algorithms safer and faster, Jacob Høy Berthelsen explained.
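A toy sketch of the explainability idea (not Enversion's system; the feature names and weights are invented): a model whose prediction decomposes into per-feature contributions, so one can trace exactly which inputs drove a given score. A plain linear model is used because its decomposition is exact and easy to read.

```python
# Hypothetical explainable model: a linear risk score whose prediction
# splits cleanly into one contribution per input feature.
weights = {"age": 0.03, "systolic_bp": 0.02, "smoker": 0.50}  # invented
bias = -4.0

def predict_with_explanation(patient):
    """Return a risk score plus each feature's contribution to it."""
    contributions = {f: weights[f] * patient[f] for f in weights}
    score = bias + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation(
    {"age": 60, "systolic_bp": 140, "smoker": 1}
)
for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {contribution:+.2f}")
print(f"risk score: {score:+.2f}")
```

Listing the contributions in descending order is the "going back" Berthelsen describes: instead of a single opaque number, the model reports what each input added to the result.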
AI provides better dialogue with the patient
From the floor, Welfare Tech’s project manager Søren Parmar-Sielemann pointed out yet another benefit of eXplainable AI. In addition to being a significant support in development, XAI should also be able to strengthen the clinical staff’s dialogue with the patient, he believed.
-It must be easier for the doctor to explain the AI's recommendations to a patient when you can go back and see where the data is being extracted from and what lies behind the AI diagnosis, Søren Parmar-Sielemann assessed.
He was backed by Ole Graumann, head of research and consultant at Odense University Hospital.
-It is important that amid all this talk of artificial intelligence we also keep our focus on human interaction. Contact with our patients will always be one of our primary tasks. But overall, I am incredibly happy to ride this AI wave, which will help us with some of the tasks we simply do not have time for, so that we have more time for patient contact, Ole Graumann said.