Dr A.I. will see you now — The age of Artificial Intelligence in Healthcare

Google DeepMind’s breakthrough might help save the sight of millions around the world.

Rutger Hauer playing Roy Batty in Blade Runner (1982). The Lead Replicant describing his life as both rain and tears flow down his face. Credit: Warner Bros

“I’ve seen things you wouldn’t believe…”

Had he spent more time scrutinising millions of Optical Coherence Tomography (OCT) scans rather than attack ships on fire off the shoulder of Orion, perhaps Dr Roy Batty might have been the most eminent Medical Retina specialist of Ridley Scott’s fictional 2019.

Philip K. Dick’s dystopian vision was penned in 1968 and later adapted into a neo-noir masterpiece by Scott in 1982. The synthetic beings of his tale, outwardly identical to adult humans, were created to replace humans in menial or undesirable jobs, their only apparent deficiencies being a lack of emotional range and a four-year lifespan. The themes of humanity and identity continue to resonate despite the decades that have passed since the novel was written.

We are still a long way from androids replacing any profession, let alone doctors or nurses. Nevertheless, a potentially monumental triumph in the application of AI technology in medicine has just materialised, the fruits of which might benefit millions worldwide.

Two London-based teams have collaborated to develop AI technology which can analyse OCT retinal scans, detect a number of eye conditions and triage the patients who need urgent care. Google’s DeepMind team, spearheaded by Jeffrey De Fauw, has applied a neural network learning system which matches highly experienced doctors and could reduce sight loss by minimising the time between detection and treatment, a delay in referral that still causes many people to go blind.

The potential AI-enhanced process to detect eye disease. Credit: DeepMind Health/Moorfields Eye Hospital

Pearse Keane, lead clinician for the project at Moorfields Eye Hospital, describes DeepMind’s algorithm:

“As good, or maybe even a little bit better, than world-leading consultant ophthalmologists at Moorfields in saying what is wrong in these OCT scans”

Artificial Brains — From Chess to Go

DeepMind, founded in the UK in 2010 and acquired by Google in 2014, seeks to build powerful general-purpose learning algorithms and uncover the mystery of intelligence. Thus far, its most tangible successes have come from defeating humans at games.

Perhaps its landmark gaming victory came in 2016, when DeepMind’s AlphaGo beat one of the world’s top Go players, Lee Sedol, 4–1 in a five-game match, using a system trained in part through supervised learning: watching and analysing large numbers of games between human players. Despite this resounding triumph of machine over man in the ancient strategy board game, DeepMind has its ancestor, IBM’s Deep Blue, to thank for the first such victory.

Garry Kasparov playing chess against IBM’s Deep Blue in 1997. Credit: Peter Morgan/Reuters

In 1996, world chess champion Garry Kasparov beat Deep Blue 4–2. One year later, Deep Blue came back for revenge and beat Kasparov 3½–2½. The message was clear: artificial intelligence was catching up with human intelligence. Yet Deep Blue’s algorithm depended on “brute computational force”, evaluating millions of positions. That works well enough for chess, in which there are 20 possible opening moves. Go, a game originating in China almost 2,500 years ago, has 361 possible opening moves on its 19×19 grid. Its game tree is so large that no AI can currently explore every possibility using Deep Blue’s “brute force” method.
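To get a feel for why brute force breaks down, here is a minimal, illustrative sketch in Python. It treats the opening-move counts above (20 for chess, 361 for Go) as if they were constant branching factors, a deliberate oversimplification, purely to show how quickly the number of possible move sequences explodes.

```python
# Illustrative only: treat the opening-move counts as a constant branching
# factor (a big simplification) and see how fast the game trees grow.
CHESS_OPENING_MOVES = 20
GO_OPENING_MOVES = 361

def positions_after(branching_factor: int, plies: int) -> int:
    """Approximate number of move sequences after `plies` half-moves,
    assuming the branching factor never changes."""
    return branching_factor ** plies

for plies in (2, 4, 6, 8):
    chess = positions_after(CHESS_OPENING_MOVES, plies)
    go = positions_after(GO_OPENING_MOVES, plies)
    print(f"After {plies} plies: chess ~{chess:.1e} sequences, Go ~{go:.1e}")
```

Even at eight half-moves the gap is roughly ten orders of magnitude, which is why AlphaGo needed something smarter than exhaustive search.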

Lee Sedol losing to DeepMind’s AlphaGo at Go. Credit: Korea Baduk/Reuters

DeepMind’s AlphaGo, on the other hand, works on a combination of different elements which are meant to mimic human decision-making. The algorithm, developed at DeepMind under co-founder Demis Hassabis, consists of a number of components: supervised learning (being trained by analysing games between human experts), reinforcement learning (playing against itself millions of times and maximising the expected chance of winning), an “intuition” rollout policy (quickly predicting how a human would play), a value network (quantifying the chances of success from a given position), and a search algorithm, Monte Carlo tree search, which brings all of these together.
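To make these moving parts concrete, here is a deliberately simplified Monte Carlo tree search sketch in Python. The game (a single pile of stones from which players alternately take one to three), the uniform policy prior and the random-rollout evaluation are all stand-ins invented for this illustration; AlphaGo’s real policy and value networks are deep neural networks trained as described above, and its search is far more sophisticated.

```python
"""Toy Monte Carlo tree search guided by stand-in 'policy' and 'rollout'
functions. A simplified sketch, not DeepMind's actual AlphaGo code."""
import math
import random

# Toy game: a pile of stones, players alternately remove 1-3,
# and whoever takes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def policy_prior(stones):
    """Uniform prior over legal moves. AlphaGo instead uses a policy network
    trained on human games (supervised) and on self-play (reinforcement)."""
    moves = legal_moves(stones)
    return {m: 1.0 / len(moves) for m in moves}

def rollout_winner(stones, to_move):
    """Play random moves to the end and return the winner (+1 or -1).
    AlphaGo combines a fast rollout policy with a learned value network."""
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        to_move = -to_move
    return -to_move  # the player who just took the last stone won

class Node:
    def __init__(self, stones, player, prior=1.0):
        self.stones, self.player, self.prior = stones, player, prior
        self.children = {}   # move -> Node
        self.visits = 0
        self.value = 0.0     # total outcome, from the perspective of the player who moved into this node

def select_child(node, c_puct=1.4):
    """PUCT-style selection: exploit the average value, explore in proportion
    to the prior and inversely to the visit count."""
    sqrt_total = math.sqrt(node.visits + 1)
    def score(child):
        q = child.value / child.visits if child.visits else 0.0
        return q + c_puct * child.prior * sqrt_total / (1 + child.visits)
    return max(node.children.values(), key=score)

def mcts(root_stones, simulations=2000):
    root = Node(root_stones, player=+1)
    for _ in range(simulations):
        node, path = root, [root]
        # 1. Selection: walk down the tree while children exist.
        while node.children:
            node = select_child(node)
            path.append(node)
        # 2. Expansion: add one layer of children weighted by the prior.
        if node.stones > 0:
            for move, p in policy_prior(node.stones).items():
                node.children[move] = Node(node.stones - move, -node.player, prior=p)
        # 3. Evaluation: random rollout from the leaf position.
        winner = rollout_winner(node.stones, node.player)
        value = 1.0 if winner == -node.player else -1.0
        # 4. Backup: propagate the result, flipping perspective at each level.
        for n in reversed(path):
            n.visits += 1
            n.value += value
            value = -value
    # Play the most-visited move at the root, as AlphaGo does.
    best_move, _ = max(root.children.items(), key=lambda kv: kv[1].visits)
    return best_move

if __name__ == "__main__":
    print("Suggested move from a pile of 10 stones:", mcts(10))
```

Each simulation follows the same four steps AlphaGo uses: select a promising line, expand the tree, evaluate the new position, and feed the result back up the tree so that good moves are visited more often.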

The Age of Scans

A practitioner performing an OCT scan. Credit: Moorfields Eye Hospital

It’s all very well to beat humans at chess or Go, but what about diagnosing diseases? To find a real-world application for the human-like decision-making used by DeepMind’s AlphaGo, the team at the company’s Health division looked at Optical Coherence Tomography (OCT) scans. OCT, first introduced over two decades ago, is a form of three-dimensional eye imaging which resolves the retina into its different layers. OCT machines have come a long way since their inception and have become increasingly sophisticated in how data is generated and presented. Nevertheless, they are used routinely by eye doctors to diagnose diseases such as age-related macular degeneration, diabetic retinopathy and glaucoma.

The publication of the work from DeepMind and Moorfields Eye Hospital in Nature Medicine this week reports that the algorithm performed as well as two leading retina specialists in analysing OCT scans and grading the urgency of referral, with an error rate of only 5.5%. This was despite the algorithm not having access to some of the extra information, such as patient records, that the doctors had. The algorithm worked on two different types of OCT machine, and was also able to give confidence ratings based on the features of a scan which it considered suggestive of a diagnosis. Importantly, not a single urgent case was missed across the 14,884 scans used in the study.
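As an aside, the headline figures are straightforward to compute once each scan has a predicted and a gold-standard referral decision. The snippet below is a minimal sketch using invented, hypothetical labels (not the study’s data or its exact referral categories), showing how a referral error rate and an “urgent cases missed” count would be derived.

```python
# Illustrative only: hypothetical triage labels, not data from the study.
URGENT = "urgent"

gold = ["urgent", "routine", "semi-urgent", "urgent", "observation"]
pred = ["urgent", "semi-urgent", "semi-urgent", "urgent", "observation"]

# Error rate: fraction of scans where the predicted referral decision
# disagrees with the gold-standard decision.
errors = sum(1 for g, p in zip(gold, pred) if g != p)
error_rate = errors / len(gold)

# Safety check: how many gold-standard urgent cases were not flagged urgent?
missed_urgent = sum(1 for g, p in zip(gold, pred) if g == URGENT and p != URGENT)

print(f"Referral error rate: {error_rate:.1%}")
print(f"Urgent cases missed: {missed_urgent}")
```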

Real-world application

This is just the first stage of the research, although Dr Keane is confident that a final product is not too far away. DeepMind and Moorfields now need to run clinical trials of their OCT system so that doctors have the chance to test it. DeepMind co-founder Mustafa Suleyman hopes that:

“when this is ready for deployment, which will be several years away, it will end up impacting 300,000 patients per year”

The team hopes that regulators will approve a final product on the strength of its immediate, tangible benefits: a reduction in the time and manpower needed to manually inspect scans, make diagnoses and refer patients for treatment.

Practical Benefits in the Developing World

Peek Vision is a smartphone-based suite which includes an adapter, Peek Retina, allowing the retina to be viewed with a smartphone. Credit: PeekVision.org

AI-powered screening can have an enormous impact in hard-to-reach areas. The ubiquity of smartphones around the world makes adding a portable camera and creating an image acquisition system simple and inexpensive. Already, companies such as UK-based Peek Vision have introduced camera adapters which allow high-quality images to be obtained easily and then analysed remotely.

Companies such as California-based Compact Imaging are currently working to make small form-factor multiple reference OCT (MR-OCT) available for smartphones and wearable technologies. The combination of these compact devices and AI-powered screening software could bridge geographical and economic chasms for many of the 285 million people worldwide living with some form of sight loss.

Sight loss around the world. Credit: DeepMind Health/Moorfields Eye Hospital

Artificial Intelligence elsewhere in health

These developments can act as a blueprint for the development of artificial intelligence elsewhere. DeepMind is currently doing research with University College London to assess whether AI can tell the difference between cancer and healthy tissue in CT and MRI scans. It is also working with Imperial College London to assess whether AI can interpret mammograms and improve accuracy in breast cancer screening.

In all these cases, the most practical benefit of using AI to screen for disease is one of resources: doctors would be freed to spend more time with individual patients, and more time working on and providing treatments.

Pitfalls in AI’s Future

“I did everything, everything you ever asked! I created the perfect system” Clu in Tron Legacy (2010). Credit: Disney

Back to the realms of fiction, where accounts of AI are often littered with depictions of ever-evolving intelligences which strive to be perfect, such as Marvel’s Ultron or Tron’s Clu. These AIs struggle to balance a “human” rationalisation of ethics with the necessity to achieve their goal, with Earth-threatening consequences.

Though we are far from apocalyptic scenarios, DeepMind itself has already been embroiled in controversy, when it emerged that 1.6 million patient records had not been adequately safeguarded when shared between London’s Royal Free Hospital and DeepMind. The data-sharing agreements between the two had to be rewritten, and DeepMind has since created an “Ethics & Society” group to maintain the ethical standards of its AI and ensure that social good is prioritised during the fast-moving evolution of these technologies.

Clearly, there may be obstacles ahead that no one can predict. DeepMind’s co-founder Mustafa Suleyman highlights the extent of the challenge:

“It won’t be easy: the technology sector often falls into reductionist ways of thinking, replacing complex value judgments with a focus on simple metrics that can be tracked and optimised over time….

Getting these things right is not purely a matter of having good intentions. We need to do the hard, practical and messy work of finding out what ethical AI really means.”

A future with everything to play for

Nevertheless, Suleyman describes a future which, with the right guidance, could be aided immensely by artificial intelligence when aligned with human values:

“If we manage to get AI to work for people and the planet, then the effects could be transformational. Right now, there’s everything to play for.”

A future, potentially, riding on AI. Credit: Luca D’Urbino

Seeing Things

An Art Exhibition inspired by the hallucinations of Charles Bonnet Syndrome

Visual Hallucinations

It was a pleasure to attend the first day of the Seeing Things interactive art exhibition, which is taking place at the lovely Forum in Norwich over the next two weeks.

Charles Bonnet Syndrome is a condition in which people experience visual hallucinations after sight loss. Unlike some other types of hallucination, those who experience them know that they are simply a creation of the brain in reaction to visual loss.

 Fascinating paintings depicting some of these hallucinations. Source: Own photo


The Art of Charles Bonnet

This art exhibition, set up by the NNAB (Norfolk and Norwich Association for the Blind), features art from people who experience the syndrome, as well as from other visual artists who have been inspired by speaking to those who live with these vivid hallucinations, which have their own unique attributes compared with other types of hallucination.

To the left – a bear statue in front of some upside-down cupcakes. Strange faces can also be a feature of the condition. Source: Own photo

Experiences

Dominic Ffytche, a world expert in the condition, gave a fantastic lecture about the syndrome, and it was fascinating to listen to the experiences of those who live with it. The audience comprised people who suffer from the condition, people who had not previously heard of the syndrome, and clinicians, such as myself, who are aware of the condition but want to understand more and gain perspective.

I was particularly intrigued by the number of people who experience hallucinations of period clothing from different eras, which seems to be a consistent feature of the syndrome. Interestingly, even when the syndrome was first described 250 years ago, the literature describes sufferers talking about people wearing the period dress of the time. Perhaps 18th-century formal-wear has a hallucinatory quality to it?

 Dr Dominic Ffytche, an expert in the condition, shows images of certain visual hallucinations that people experience. Source: own photo

Gaps

Even though Charles Bonnet Syndrome was first described 250 years ago, by a Swiss philosopher writing about his grandfather’s experiences after losing his sight to cataracts, we still do not know exactly why it happens. We suspect that the brain fills in the gaps left by visual loss, producing new fantastical pictures or recalling old images which it has stored. For many people these hallucinations are not a problem, but for some they can, understandably, be distressing. It certainly helps to understand them, and it is useful for sufferers, the public and clinicians (such as yours truly) alike to be aware of and understand this fascinating condition.

 An interesting hallucination — bear and inverse cupcakes. Source: own photo

To this end, it is fantastic to have an art exhibition which both raises awareness and bewitches us, humbling us as clinicians into realising there is still so much about the eyes and the brain that we don’t yet understand. Do you have any experience of this condition? Please feel free to comment below.

Links:
Royal National Institute of Blind People
Norfolk and Norwich Association for the Blind
NHS Charles Bonnet Syndrome Information


The Eye in Mixed Reality: Using Microsoft’s Hololens to learn about the eye!

I think Mixed Reality (a hybrid of the real world and virtual reality which creates new environments and visualisations where real and digital objects co-exist and interact in real time) is particularly useful for a subject like ophthalmology, in which the spatial context of structures can be difficult to grasp. The technology has only just come out, but watch this space for new ways to learn anatomy and surgical skills!


The Physiology of the Eye: A fun way for students and patients to learn about the eye!

As an ophthalmology doctor, I am delighted to see these kinds of applications, which can help medical and optometry students, alongside patients and the public, understand how the structures of the eye function and interact with each other. I enjoyed going through the application, playing with the well-constructed models and doing the challenging quizzes at the end of each section. Recommended for anyone who wants to learn more about The Eye.

