Dr A.I. will see you now — The age of Artificial Intelligence in Healthcare

Google DeepMind’s breakthrough might help save the sight of millions around the world.

Rutger Hauer playing Roy Batty in Blade Runner (1982). The Lead Replicant describing his life as both rain and tears flow down his face. Credit: Warner Bros

“I’ve seen things you wouldn’t believe…”

Had he spent more time scrutinising millions of Optical Coherence Tomography (OCT) scans rather than attack ships on fire off the shoulder of Orion, perhaps Dr Roy Batty might have been the most eminent Medical Retina specialist of Ridley Scott’s fictional 2019.

Philip K. Dick’s dystopian vision was penned in 1968 and later adapted into a neo-noir masterpiece by Scott in 1982. The synthetic beings of his tale, outwardly identical to adult humans, were created to replace humans in menial or undesirable jobs, their only apparent deficiencies being a limited emotional range and a four-year life span. The themes of humanity and identity continue to resonate despite the decades which have passed since the novel was written.

We are still a long way from androids replacing any profession, let alone doctors or nurses. Nevertheless, a potentially monumental triumph in the application of AI technology in medicine has just materialised, the fruits of which might benefit millions worldwide.

Two London-based teams have collaborated to develop AI technology which can analyse OCT retinal scans, detect a number of eye conditions and triage those patients who need urgent care. Google’s DeepMind team, spearheaded by Jeffrey De Fauw, has applied a deep neural network which matches highly experienced doctors and could reduce sight loss by minimising the time between detection and treatment. Delays in referral for treatment still cause many people to go blind.

The potential AI-enhanced process to detect eye disease. Credit: DeepMind Health/Moorfields Eye Hospital

Pearse Keane, lead clinician for the project at Moorfields Eye Hospital, describes DeepMind’s algorithm:

“As good, or maybe even a little bit better, than world-leading consultant ophthalmologists at Moorfields in saying what is wrong in these OCT scans”

Artificial Brains — From Chess to Go

DeepMind, founded in the UK in 2010 and acquired by Google in 2014, seeks to build powerful general-purpose learning algorithms and uncover the mysteries of intelligence. Thus far, its most tangible successes have been in defeating humans at games.

Perhaps its landmark gaming victory came in 2016, when DeepMind’s AlphaGo beat world-class Go player Lee Sedol 4–1 in a five-game match, having been trained in part through supervised learning: watching and analysing large numbers of games between humans. Despite this resounding triumph of machine over man in the ancient strategy board game, DeepMind has its ancestor, IBM’s Deep Blue, to thank for the first such victory.

Garry Kasparov playing chess against IBM’s Deep Blue in 1997. Credit: Peter Morgan/Reuters

In 1996, world chess champion Garry Kasparov beat Deep Blue 4–2. One year later, Deep Blue came back for revenge and beat Kasparov 3½–2½. The message was clear: artificial intelligence was catching up with human intelligence. Yet Deep Blue’s algorithm depended on “brute computational force”, evaluating millions of positions. That works for chess, in which there are 20 possible opening moves. Go, a game originating in China almost 2,500 years ago, has 361 possible opening moves on its 19×19 grid, and its game tree is so large that no AI can currently explore every possibility using Deep Blue’s “brute force” method.
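
To get a rough feel for the numbers, the toy calculation below uses the commonly quoted average branching factors of roughly 35 moves per position in chess and roughly 250 in Go. These figures are illustrative approximations, not exact game-tree counts, but they show how quickly the two games diverge.

```python
# Rough, illustrative comparison of game-tree growth in chess vs Go.
# The branching factors (~35 for chess, ~250 for Go) are commonly quoted
# approximations, not exact counts.

CHESS_BRANCHING = 35
GO_BRANCHING = 250

def sequences_after(branching_factor: int, moves: int) -> int:
    """Approximate number of distinct move sequences after a given number of moves."""
    return branching_factor ** moves

for moves in (2, 4, 8):
    chess = sequences_after(CHESS_BRANCHING, moves)
    go = sequences_after(GO_BRANCHING, moves)
    print(f"after {moves} moves: chess ~{chess:.1e}, Go ~{go:.1e}")

# After just eight moves Go already exceeds 10^19 sequences, which is why an
# exhaustive "brute force" search is hopeless on a 19x19 board.
```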

Lee Sedol losing to DeepMind’s AlphaGo at Go. Credit: Korea Baduk/Reuters

DeepMind’s AlphaGo, on the other hand, works on a combination of elements which are meant to mimic human decision-making. The algorithm, developed by DeepMind co-founder Demis Hassabis and his team, combines several components: supervised learning (training on games between human experts), reinforcement learning (playing against itself millions of times to maximise its chance of winning), a fast “intuition” rollout policy (predicting how a human would play), a value network (quantifying the chances of success from a given position) and an algorithm which brings all these together, called Monte Carlo tree search.
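
For readers curious about that last ingredient, the sketch below is a deliberately simplified, game-agnostic illustration of the select, expand, simulate and backpropagate loop of Monte Carlo tree search using the standard UCT selection rule. It is not DeepMind’s implementation: the `state` interface is hypothetical, and where AlphaGo consults its rollout policy and value network, this toy version simply plays out moves at random.

```python
import math
import random

# A minimal, game-agnostic Monte Carlo tree search (UCT variant), for
# illustration only. The hypothetical `state` object is assumed to offer
# legal_moves(), play(move) and winner() methods.

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # move -> Node
        self.visits = 0
        self.wins = 0.0

    def uct_score(self, exploration=1.4):
        # Upper Confidence Bound applied to trees: exploitation + exploration.
        if self.visits == 0:
            return float("inf")          # always try unvisited children first
        exploit = self.wins / self.visits
        explore = exploration * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def untried_moves(node):
    return [m for m in node.state.legal_moves() if m not in node.children]

def mcts(root_state, simulations=1000):
    root = Node(root_state)
    for _ in range(simulations):
        node = root
        # 1. Selection: descend with UCT while the node is fully expanded.
        while node.children and not untried_moves(node):
            node = max(node.children.values(), key=Node.uct_score)
        # 2. Expansion: add one unexplored child (unless the game is over).
        untried = untried_moves(node)
        if untried:
            move = random.choice(untried)
            node.children[move] = Node(node.state.play(move), parent=node)
            node = node.children[move]
        # 3. Simulation: random playout to the end of the game.
        state = node.state
        while state.legal_moves():
            state = state.play(random.choice(list(state.legal_moves())))
        result = state.winner()          # e.g. 1.0 for a root-player win, 0.0 otherwise
        # 4. Backpropagation: update statistics along the path just visited.
        while node is not None:
            node.visits += 1
            node.wins += result          # simplification: single-perspective scoring
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda m: root.children[m].visits)
```

In AlphaGo proper, the random playout stage is guided by the rollout policy and the leaf evaluation is blended with the value network’s prediction, which is what allows the search to focus on a tiny, promising fraction of Go’s vast game tree.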

The Age of Scans

A practitioner performing an OCT scan. Credit: Moorfields Eye Hospital

It’s all very well to beat humans at chess or Go, but what about diagnosing diseases? To find a real-world application for the human-like decision making used by DeepMind’s AlphaGo, the team at the company’s Health division looked at Optical Coherence Tomography (OCT) scans. OCT is a form of three-dimensional eye imaging, first introduced over two decades ago, which resolves the retina into its individual layers. OCT machines have come a long way since their inception and have become increasingly complex in how data is generated and presented. Despite this complexity, they are used routinely by eye doctors to diagnose diseases such as age-related macular degeneration, diabetic retinopathy and glaucoma.

The work from DeepMind and Moorfields Eye Hospital, published in Nature Medicine this week, reports that the algorithm performed as well as two leading retina specialists in analysing OCT scans and grading the urgency of referral, with an error rate of only 5.5%. This was despite the algorithm not having access to some extra information, such as patient records, that the doctors had. The algorithm was tested on two different types of OCT machine and was also able to give confidence ratings based on the features of each scan which it considered suggestive of disease. Importantly, not a single urgent case was missed among the 14,884 scans used in the study.
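
The mechanics of such a system are beyond the scope of a news piece, but the basic shape of the task, mapping a three-dimensional OCT volume to a referral-urgency class together with a confidence score, can be illustrated with a toy model. The PyTorch sketch below is purely hypothetical: the network, its layer sizes and the input shape are invented for illustration and bear no resemblance to DeepMind’s actual system, which first segments the scan into tissue types and then classifies the resulting map. Only the four referral categories reflect those used in the Moorfields work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy illustration only: a tiny 3D CNN mapping an OCT volume to one of four
# referral-urgency classes, with the softmax probability used as a crude
# "confidence" score. This is NOT DeepMind's architecture; every layer size
# and the input shape below are arbitrary assumptions for the sketch.

REFERRAL_CLASSES = ["urgent", "semi-urgent", "routine", "observation only"]

class ToyOctTriageNet(nn.Module):
    def __init__(self, num_classes: int = len(REFERRAL_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),        # global average over the whole volume
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, channels=1, depth, height, width) OCT scan
        x = self.features(volume).flatten(1)
        return self.classifier(x)           # raw logits, one per referral class

model = ToyOctTriageNet()
scan = torch.randn(1, 1, 32, 64, 64)        # a fake, heavily down-sampled OCT volume
probs = F.softmax(model(scan), dim=1).squeeze(0)
prediction = REFERRAL_CLASSES[int(probs.argmax())]
print(f"suggested referral: {prediction} (confidence {probs.max().item():.0%})")
```

A softmax probability is, of course, only a rough proxy for clinical confidence; in practice, careful calibration and ensembling across multiple trained models would be needed before such a number could inform a referral decision.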

Real-world application

This is just the first stage of research, although Dr Keane is confident that a final product is not too far away. DeepMind and Moorfields now need to run clinical trials of their OCT system so that doctors have the chance to test it. DeepMind co-founder Mustafa Suleyman hopes that:

“when this is ready for deployment, which will be several years away, it will end up impacting 300,000 patients per year”

The team hopes that regulators will approve a final product on the basis of its immediate, tangible benefits: a reduction in the time and manpower needed to manually inspect scans, make diagnoses and refer patients for treatment.

Practical Benefits in the Developing World

Peek Vision is a smartphone-based suite of eye-examination tools which includes an adapter, Peek Retina, allowing the retina to be viewed with a smartphone. Credit: PeekVision.org

AI-powered screening can have an enormous impact in hard-to-reach areas. The ubiquity of smartphones around the world makes adding a portable camera and creating an image acquisition system simple and inexpensive. Already, companies such as UK-based Peek Vision have introduced camera adapters which allow high-quality images to be obtained easily and then analysed remotely.

Companies such as California-based Compact Imaging are currently working to make small form-factor multiple reference OCT (MR-OCT) available for smartphones and wearable technologies. The combination of these compact devices and AI-powered screening software could bridge geographical and economic chasms for many of the 285 million people worldwide living with some form of sight loss.

Sight loss around the world. Credit: DeepMind Health/Moorfields Eye Hospital

Artificial Intelligence elsewhere in health

These developments can act as a blueprint for applying artificial intelligence elsewhere in medicine. DeepMind is currently doing research with University College London to assess whether AI can tell the difference between cancerous and healthy tissue in CT and MRI scans. It is also working with Imperial College London to assess whether AI can interpret mammograms and improve accuracy in breast cancer screening.

In all these cases, the most practical benefit of using AI to screen for disease is one of resources: doctors would be freed to spend more time with individual patients, and more time developing and providing treatments.

Pitfalls in AI’s Future

“I did everything, everything you ever asked! I created the perfect system” Clu in Tron Legacy (2010). Credit: Disney

Back in the realms of fiction, accounts of AI are often littered with depictions of ever-evolving intelligences which strive to be perfect, such as Marvel’s Ultron or Tron’s Clu. These AIs struggle to balance a “human” rationalisation of ethics with the necessity of achieving their goal, with Earth-threatening consequences.

Though we are far from apocalypse scenarios, DeepMind itself has already been embroiled in controversy, when it emerged that 1.6 million patient records had not been adequately safeguarded when shared between London’s Royal Free Hospital and the company. The data-sharing agreements between the two had to be rewritten, and DeepMind has since created an “Ethics & Society” group to maintain the ethical standards of its AI and ensure that social good is prioritised during the fast-moving evolution of these technologies.

Clearly, there may be obstacles ahead that no one can predict. DeepMind’s co-founder Mustafa Suleyman highlights the extent of the challenge:

“It won’t be easy: the technology sector often falls into reductionist ways of thinking, replacing complex value judgments with a focus on simple metrics that can be tracked and optimised over time….

Getting these things right is not purely a matter of having good intentions. We need to do the hard, practical and messy work of finding out what ethical AI really means.”

A future with everything to play for

Nevertheless, Suleyman describes a future which, with the right guidance, could be aided immensely by artificial intelligence when aligned with human values:

“If we manage to get AI to work for people and the planet, then the effects could be transformational. Right now, there’s everything to play for.”

A future, potentially, riding on AI. Credit: Luca D’Urbino

Not Just Games – How Virtual Reality Will Heal And Teach In 2017 And Beyond

Source: http://www.huffingtonpost.co.uk/nima-ghadiri/not-just-games-how-virtua_b_13865544.html?

Image credit: marco_piunti via Getty Images
 


Home Virtual Reality has seemingly been looming on the horizon for three decades, ever since VPL Research founder Jaron Lanier announced the EyePhone, a device which retailed for just short of $10,000. But motion sickness and the astronomical price pushed the technology into the wilderness.

The VPL EyePhone from 1989. Image: VPL Research https://vrwiki.wikispaces.com/

This year, home Virtual Reality re-entered mainstream consciousness with no fewer than five new devices under the Christmas tree, ranging from premium room-scale systems like the HTC Vive (think Star Trek’s Holodeck) to more accessible smartphone-based products such as Google Daydream and Samsung Gear VR, with prices from £12 (Google Cardboard) to £689 (HTC Vive). Facebook founder Mark Zuckerberg calls it:

The next major computing platform that will come after mobile

The Oculus Rift headset with the Oculus Touch hand controllers. Image: Oculus VR https://www3.oculus.com/en-us/rift/

With these new devices, it will be interesting to observe which finds most favour in 2017. Will Facebook-owned Oculus Rift use its technology and social-media real estate to market its pioneering product to the masses, or will Sony’s PlayStation VR deliver the fun games to make VR a genuine prospect for a world of console gamers? Will people prefer the affordability and portability of Samsung’s Gear VR over the tricky room-based setup of the HTC Vive, arguably the most immersive and impressive device of all?

At its core, Virtual Reality remains a gaming technology. However, all of the devices are equally adept at delivering remarkable storytelling experiences and 360° music videos, or plunging the user into historic galleries and exotic worlds. Entering a famous Vincent Van Gogh painting in Borrowed Light Studios’ “The Night Café” is, in equal measure, beautiful and surreal.

Borrowed Light Studios’ “The Night Café” is a unique experience allowing you to walk around a café inspired by the famous painting by Van Gogh. Image: Borrowed Light Studios http://www.borrowedlightvr.com/the-night-cafe/

As the user base is small, the market for “Triple A” gaming titles lasting hundreds of hours is minimal. The fortunate side-effect is a booming market for short experiences and quirkier games, such as “Accounting” from the studio Crows Crows Crows, which are both enjoyable and affordable. But there is still the odd zombie shooter, and zombies jumping out at you in virtual reality is very, very frightening. Beyond slaughtering undead hordes, are there more constructive applications of virtual reality technology? Yes, and the possibilities are endless….

Education and Training

Virtual Reality has always been a prospect for training in professions where people’s lives and health are at stake. Simulators are already used to train pilots to deal with unexpected events, but the opportunity for more convenient home practice can only be a positive development.

Medical professionals can learn interventions, both challenging and routine, for the operating room, emergency department or clinic. The Miami Children’s Hospital, for example, is developing instructional software for basic procedures such as nasogastric tube insertion and starting an IV drip, with the aim of educating doctors as well as patients. Medical and nursing students can also learn their subject with more context than a book provides, for example by interacting with a “live” anatomical dissection table.

Organon VR Anatomy on the Oculus Rift makes learning anatomical structures more vivid and helps the understanding of the spatial relationships between them. Image: Author’s Own http://www.3dorganon.com/site/

Over the past decade, there has been an explosion in the use of simulation in medicine, particularly in cardiopulmonary resuscitation and life support, where it portrays a scenario far more realistically than a mannequin and actors, and where it offers more for assessment: gestures, eye movements, metrics and real-time feedback are more useful measures of these skills than a written examination.

For surgical training, VR is increasingly being proposed for learning minimally invasive procedures. It provides a realistic operating-room setting, including interaction with colleagues, in a safe environment where mistakes cause no harm. Bimanual tool handles and force feedback can mimic surgical instruments, although it will take a while for home devices to reach the accuracy of commercially available simulators such as the NeuroTouch brain-surgery simulator. New developments in tracking technologies such as Microsoft’s Handpose (which records complex hand movements) may add further intricacy to VR surgical training.

Virtual Reality is not limited to practical skills. Even attributes like empathy can be cultivated through the headsets. Embodied Labs has designed a VR program called “We Are Alfred” which gives medical students, many of whom are in their twenties, an insight into the life of a 74-year-old man with audiological and visual problems. Experiencing the patient’s perspective first-hand can be invaluable in how students develop as medical professionals.

Embodied Labs’ “We Are Alfred” allows students to experience the life of a 74-year-old with visual and audiological problems. Image: Embodied Labs http://www.embodiedlabs.com/

The complete audiovisual immersion provided by Virtual Reality can conjure or awaken emotions which users may not have been aware of. This has been used to good effect in journalism, such as The Guardian’s award-winning 6×9 experience of solitary confinement and film-maker Nonny de la Peña’s Project Syria, which places the viewer in the plight of Syrian child refugees.

Oculus VR founder Palmer Luckey affirms that this immersive power of Virtual Reality can be a medium for social change, through its ability to put you in places “in a much more real way”. Virtual reality campaigns following the 2015 Nepal earthquake and The Dolphin Project (both by Huffington Post’s RYOT) have had a palpable impact on awareness. A recent AT&T campaign in the United States puts the user behind the wheel of a car driving through residential neighbourhoods, ultimately causing a tragedy while texting. Feedback from this emotive exercise showed that people who had the VR experience consciously put their phones away before driving.

New ways of working

Meetings and conferences are set to transform from the often vaguely distant videoconference to VR meetings. The social games and experiences on this year’s devices have highlighted how much more connected people feel in a shared VR space than on a two-dimensional screen. Be it event or product marketing, site inspections, interviews or negotiations, the presence and interactivity afforded by Virtual Reality will see it slowly replace Skype as “the next best thing” to physical contact.

In the field of surgery, Dr Shafi Ahmed earlier this year performed a bowel operation which was live-streamed so that anyone could “jump into the operating theatre”. With improvements in haptic devices, performing remote surgery may become a reality sooner than we would have expected.

Surgery streamed in Virtual Reality. Image: Medical Realities http://www.medicalrealities.com/

Tests and Therapies

Diagnosis of illness lends itself to Virtual Reality, particularly for conditions in which visual assessment is important, such as Parkinson’s disease and multiple sclerosis. A test developed by Tomsk Polytechnic and Siberian State Universities in Russia uses cheap headsets and Microsoft Kinect sensors to monitor people’s movements in virtual environments, allowing neurologists to make earlier diagnoses and start rehabilitation sooner. Indeed, the ability to see a doctor face-to-face from a distance was one of the motivations for Facebook’s Mark Zuckerberg when he purchased Oculus VR.

Virtual Reality being used to diagnose Parkinson’s Disease. Image: Tomsk Polytechnic University http://tpu.ru/en/news-events/914/

Oculus founder Palmer Luckey first saw the potential of VR in rehabilitation whilst working with a team of therapists at the University of Southern California, helping military veterans suffering from post-traumatic stress disorder through exposure therapy. This involved recreating battlefield episodes under the guidance and support of a trained therapist, to help identify and address the psychological triggers of the illness. He says this was when he first realised that virtual reality could be an important new therapeutic method:

It can make a significant difference in people’s lives

Virtual reality has been shown to be beneficial in rehabilitation after strokes, particularly in improving upper-limb strength and in re-learning activities of daily living. Potential benefits have also been observed in supporting motor learning in children with cerebral palsy. It could become a key weapon in the rehabilitation arsenal of physiotherapists and occupational therapists over the next decade, changing the face of rehabilitation for all kinds of diseases.

Virtual Reality is no longer in its infancy. It has hit the mainstream and is here to stay. Let’s use it to improve our world.
