15 January 2021
Medical practice requires ever more interpretation of digital images. The use of AI to read those images has long been predicted, but it has proved trickier than many imagined to bring into actual clinical practice. Currently, AI is still mostly used for measuring standard markers in images – for example, the volume or shape of a lesion. Išgum: ‘Our aim is to automate that practice further, make it more reproducible, to relieve the workload of experts – but we also want to develop methods that do image-based diagnosis and prognosis, without intermediate steps.’ Experts also hope that AI will soon be able to offer risk prognosis – whether someone is at risk of a heart attack, for example – and to predict the outcome of interventions, such as what effect a given medication would have.
A question of data
As in all other areas of AI development, high-quality data sets are vital to achieving the best results. The main problem with data gathered in hospitals, as opposed to some other areas of AI image analysis, is that it is not in the public domain: myriad issues relating to privacy and patients’ rights still need to be addressed before the information can be shared among researchers. Išgum: ‘Five years ago many experts said, “In five years this will all be automated in hospitals”, and that obviously hasn’t happened yet. One of the key things slowing us down is limited data sets. Our data very often comprises one group of people, with defined limits of included pathology, analysed by a single expert, whereas in real clinical application you see so much more variety in the data.’
Interaction with experts
As well as designing the tools, Išgum is looking into ways of implementing them in clinical routine. She believes it is essential that the tools fit seamlessly into the current clinical workflow, rather than expecting workflows to adjust to the tools. AI technology for image analysis is already available in a number of medical fields, but pressing questions remain about its implementation.
‘Perhaps most importantly, we need to look at how experts will interact with these tools and how that will affect their decision-making. And we need a wide range of scientific expertise, because it also involves questions of law, such as data privacy, and areas covered by the humanities and the social sciences, such as ethics and communication – how do the patient, the expert and the software communicate with each other? – so it’s a deeply interdisciplinary undertaking.’
Ivana Išgum
In order to gain the best understanding of how the tools she designs will be used in practice, Išgum also participates in medical conferences specific to the areas for which the tools are made. ‘I really want my work to make a contribution to the clinical field, so I need to keep up with all relevant developments. Also, by doing so, I know what to work on, because it’s important to work on things that are clinically interesting. I need to understand the problems and challenges of clinicians, so I can design software that is going to be truly useful.’
Eye on the future
As for the future, Išgum says that the science-fiction image of an AI analysing a scan of a human body and diagnosing any problem it finds won’t be with us for a while yet. ‘It’s a wonderful idea, but there are many challenges associated with it – for example, overtreatment. If you see something you weren’t looking for and aren’t sure is relevant, it may be unclear whether it should be treated, and treatment can always have knock-on effects. So we’re not sure what the consequences might be, because clinical work isn’t currently done like that. It might turn out to be ideal, eventually – late diagnosis is such a problem now, and in principle this approach should help remove that – but it might bring about problems we haven’t yet anticipated. There’s so much potential in this field, but still so far to go.’