Learn from the Scarlet Spider — Think Uveitis


Ben Reilly — The Scarlet Spider. Well-intentioned but flawed.

Marvel’s Scarlet Spider is an interesting narrative experiment on the nature of the soul. Ben Reilly is a clone of the original Peter Parker and shares the physical traits of the famous webslinger. Through vaguely described “arcane magic”, he even gets an imprint of Parker’s memories.

Clone Wars: The Scarlet Spider fighting with Spider-Man. Source: Marvel Comics

However, as he does not share the many responsibilities of the original, Ben Reilly struggles to find true purpose in life. Possessing the same intelligence and inquisitive mind as Peter Parker, his desire to discover and experiment leads him to make some bad choices, and he has died and been resurrected more times than is usual even for a Marvel Comics character.

As a result, Ben has flirted with the line between heroism and villainy, seemingly settling into the role of antihero. Even when he wants to do the right thing, his virtuous intentions backfire. In a recent storyline (written by Peter David, pencilled by Will Sliney and coloured by Rachelle Rosenberg) in which demons take over Las Vegas, Ben makes an assumption we can all be guilty of when seeing red eyes:

“Is this creature possessed by some sort of satanic entity?”

Ben Reilly: The Scarlet Spider. Issue 16 (David, Sliney, Rosenberg)

Ben has been very presumptuous here. In a Las Vegas which is arguably more satanically inclined than usual, he has naturally assumed that anyone with red eyes is a demon.

He hits this man with the full force of his radioactively enhanced jab, and isn’t prepared for what comes next…

Jimmy’s family are angry and upset with the Scarlet Spider

“But… he has red eyes”. Ben struggles to explain his actions to Jimmy’s wife and daughter. It dawns on him that he has jumped to conclusions.

Jimmy’s wife explains that he has Uveitis, and she has clearly read the patient information leaflet: “It’s an inflammation of the middle layer of the eye!” The episode finishes with Ben receiving a well-deserved kick in the shin from Jimmy’s young daughter.

Learn from the Scarlet Spider. Think Uveitis

Whether you are the well-intentioned but haphazard clone of a popular superhero or not, it’s always worth thinking about Uveitis as a cause of red eye. Uveitis literally means inflammation of the uvea, the middle layer of the eye, and it can have a huge number of causes including trauma, various infections and autoimmune diseases (where the body attacks itself). In many cases, no cause is ever found.

Uveitis can have different patterns — it can come and go with no problems in between, or can flare up many times and cause a lot of discomfort and anxiety for the sufferer. Jimmy may well have had a number of flares before, and his current treatment could be anything from steroid eyedrops to tablets or injections, all with the aim of reducing the inflammation in the eye.

Jimmy lives in the United States of America, where an estimated 300,000 people are affected by Uveitis each year. In many cases, anterior uveitis may be misdiagnosed as bacterial conjunctivitis, which is treated differently, with antibiotic drops or ointment.

Jimmy has both red eyes

In this panel, Jimmy appears to have acute anterior uveitis — the form which affects the front of the eye, involving either the iris alone (iritis) or the iris and ciliary body together (iridocyclitis). Together with a red eye, he may well be suffering from symptoms such as blurred vision and light sensitivity (photophobia). The latter can be very hard for Uveitis sufferers to deal with, and even more so under the fluorescent lighting which adorns the City of Lights. Jimmy is wearing polarized sunglasses to help him manage as he navigates the Vegas strip:

Given that his eyes are so red, Jimmy is likely to have only recently had a flare of his disease, and may have only just started treatment. Both eyes are red rather than just one, which makes it more likely that he has an underlying “systemic” disease. In addition, the whole of each eye is red, which isn’t the typical pattern seen in anterior uveitis, where the classic appearance is redness immediately surrounding the iris:

Classic “Ciliary Flush” image of Iritis with redness surrounding the iris where the cornea and sclera meet, an area called the “limbus”.

However, there are other reasons why his eyes might be red while he is being treated for Uveitis, including:

  1. Episcleritis/Anterior Scleritis — He may have these conditions as a co-diagnosis (alongside his Uveitis). These diseases reflect inflammation of blood vessels in different layers of the white of the eye, and can be associated with an underlying disease. Jimmy might have Lupus, Inflammatory Bowel Disease or one of a number of other full-body diseases which can be linked to different problems in the eye.
  2. Glaucoma — Jimmy may have uveitic glaucoma, raised pressure in the eye caused by inflammation (from uveitis) obstructing and damaging the structures in the eye which allow for outflow of the aqueous humour. The sudden rise in eye pressure would cause redness. However, Jimmy would likely feel very sick and be in too much pain to verbally joust with our friendly neighbourhood spider clone.
  3. Allergy — If Jimmy has been taking eye drops for his Uveitis, they may contain a preservative which he is allergic to. I hope that his Ophthalmologist switches him to a preservative-free formulation.

One can only hope that Ben Reilly, our Scarlet Spider, learns from this lesson and doesn’t make presumptions about people’s eyes, even in the context of a satanic takeover of Sin City.


Dr A.I. will see you now — The age of Artificial Intelligence in Healthcare

Google DeepMind’s breakthrough might help save the sight of millions around the world.

Rutger Hauer playing Roy Batty in Blade Runner (1982). The Lead Replicant describing his life as both rain and tears flow down his face. Credit: Warner Bros

“I’ve seen things you wouldn’t believe…”

Had he spent more time scrutinising millions of Optical Coherence Tomography (OCT) scans rather than attack ships on fire off the shoulder of Orion, perhaps Dr Roy Batty might have been the most eminent Medical Retina specialist of Ridley Scott’s fictional 2019.

Philip K. Dick’s dystopian vision was penned in 1968 and later adapted into a neo-noir masterpiece by Scott in 1982. The synthetic beings of his tale, outwardly identical to adult humans, were created to replace humans in menial or undesirable jobs, their only deficiencies seemingly a lack of emotional range and a four-year lifespan. The themes of humanity and identity continue to resonate despite the decades which have passed since the novel was written.

We are still a long way from androids replacing any profession, let alone doctors or nurses. Nevertheless, a potentially monumental triumph in the application of AI technology in medicine has just materialised, the fruits of which might benefit millions worldwide.

Two London-based teams have collaborated to develop AI technology which can analyse OCT retinal scans, detect a number of eye conditions and triage those patients who are in need of urgent care. Google’s DeepMind team, spearheaded by Jeffrey De Fauw, has applied a neural network learning system which matches highly experienced doctors and could reduce sight loss by minimising the time between detection and treatment; delays in referral for treatment still cause many people to go blind.

The potential AI-enhanced process to detect eye disease. Credit: DeepMind Health/Moorfields Eye Hospital

Pearse Keane, lead clinician for the project at Moorfields Eye Hospital, describes DeepMind’s algorithm:

“As good, or maybe even a little bit better, than world-leading consultant ophthalmologists at Moorfields in saying what is wrong in these OCT scans”

Artificial Brains — From Chess to Go

DeepMind, founded in the UK in 2010 and acquired by Google in 2014, seeks to build powerful general-purpose learning algorithms and uncover the mystery of intelligence. Thus far, its greatest tangible successes have been in defeating humans at games.

Perhaps its landmark gaming victory came in 2016, when DeepMind’s AlphaGo beat high-ranked Go player Lee Sedol 4–1 in a five-game match, having been trained in part through supervised learning: watching and analysing large numbers of games between humans. Despite the resounding triumph of machine over man in the ancient strategy board game, DeepMind has its spiritual ancestor, IBM’s Deep Blue, to thank for the first such victory.

Garry Kasparov playing chess against IBM’s Deep Blue in 1997. Credit: Peter Morgan/Reuters

In 1996, world chess champion Garry Kasparov beat Deep Blue 4–2. One year later, Deep Blue came back for revenge and beat Kasparov 3½–2½. The message was clear: artificial intelligence was catching up with human intelligence. Yet Deep Blue’s algorithm depended on “brute computational force”, evaluating millions of positions. That works well enough for chess, in which there are 20 possible opening moves. Go, a game originating in China almost 2,500 years ago, has 361 possible opening moves on its 19×19 grid. Its search space is so large that no computer can currently explore every possibility using Deep Blue’s “brute force” method.
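To get a feel for the scale involved, here is a quick back-of-the-envelope sketch in Python (my own illustration, not anything from IBM or DeepMind). It simply counts opening possibilities, ignoring Go captures and board symmetries, and already shows why an exhaustive Deep Blue-style search becomes hopeless on a 19×19 board.

```python
# Back-of-the-envelope comparison of opening possibilities (illustrative only).
# Assumptions: Go moves simply occupy empty intersections, with captures and
# board symmetries ignored; the chess figure uses the standard 20 legal first moves.

from math import prod

chess_first_moves = 20
chess_positions_after_one_move_each = chess_first_moves * chess_first_moves  # 400

go_points = 19 * 19  # 361 intersections on the board
go_three_move_sequences = prod(go_points - i for i in range(3))  # 361 * 360 * 359

print(f"Chess positions after one move by each player: {chess_positions_after_one_move_each:,}")
print(f"Go opening sequences after just three moves:   {go_three_move_sequences:,}")
# Roughly 400 versus 46.7 million, and the gap widens with every further move.
```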

Lee Sedol losing to AlphaGo at Go. Credit: Korea Baduk/Reuters

DeepMind’s AlphaGo, on the other hand, works on a combination of different elements which are meant to mimic human decision-making. The algorithm, developed at the company co-founded by Demis Hassabis, consists of a number of components: supervised learning (being trained by analysing games between human experts), reinforcement learning (playing against itself millions of times and maximising its expected chance of winning), an “intuition” rollout policy (predicting how a human would play), a value network (quantifying the chances of success from a given position) and a search algorithm which brings all of these together, called Monte Carlo tree search.
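For the technically curious, here is a minimal Python sketch of how those ingredients meet at a single step of the tree search. It is my own simplified illustration rather than DeepMind’s code: the move names, priors and counts are invented, and the rule shown (a value estimate Q plus a prior-weighted exploration bonus U, often called PUCT) is only one component of the full published system.

```python
# Schematic sketch (not DeepMind's code) of AlphaGo-style move selection:
# each candidate move is scored by its current value estimate Q plus an
# exploration bonus U driven by the policy network's prior probability.

from dataclasses import dataclass
from math import sqrt

@dataclass
class Edge:
    prior: float        # P(s, a) from the policy ("intuition") network
    visit_count: int    # N(s, a): how often the search has tried this move
    total_value: float  # sum of value-network / rollout evaluations so far

    @property
    def mean_value(self) -> float:  # Q(s, a)
        return self.total_value / self.visit_count if self.visit_count else 0.0

def select_move(edges: dict[str, Edge], c_puct: float = 1.0) -> str:
    """Pick the move maximising Q(s,a) + U(s,a), the PUCT rule."""
    total_visits = sum(e.visit_count for e in edges.values())
    def score(e: Edge) -> float:
        u = c_puct * e.prior * sqrt(total_visits) / (1 + e.visit_count)
        return e.mean_value + u
    return max(edges, key=lambda move: score(edges[move]))

# Toy example with invented numbers: a less-explored move with a strong prior
# and promising evaluations can still be chosen over a heavily visited one.
candidates = {
    "D4": Edge(prior=0.40, visit_count=10, total_value=5.5),
    "Q16": Edge(prior=0.35, visit_count=2, total_value=1.4),
    "K10": Edge(prior=0.05, visit_count=1, total_value=0.9),
}
print(select_move(candidates))  # prints "Q16"
```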

The Age of Scans

A practitioner performing an OCT scan. Credit: Moorfields Eye Hospital

It’s all very well to beat humans at chess or Go, but what about diagnosing diseases? To find a real-world application for the human-like decision-making used by DeepMind’s AlphaGo, the team at the company’s Health division looked at Optical Coherence Tomography (OCT) scans. First introduced over two decades ago, OCT is a form of three-dimensional eye imaging which slices the retina into its different layers. OCT machines have come a long way since their inception and have become increasingly complex in how data is generated and presented. Nevertheless, they are used routinely by eye doctors to diagnose diseases such as age-related macular degeneration, diabetic retinopathy and glaucoma.

The work from DeepMind and Moorfields Eye Hospital, published in Nature this week, reports that the algorithm performed as well as two leading retina specialists in analysing OCT scans and grading the urgency of referral, with an error rate of only 5.5%. This was despite the algorithm not having access to some extra information, such as patient records, that the doctors had. The algorithm was used on two different types of OCT machine, and was also able to give confidence ratings based on the aspects of the scans which it considered suggestive of a diagnosis. Importantly, not a single urgent case was missed across the 14,884 scans used in the study.
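To make the task concrete, here is a deliberately tiny sketch of what “grading the urgency of a referral” looks like as a machine-learning problem: a network reads an OCT image and outputs a probability for each referral category. This is my own illustration in PyTorch, with an untrained toy model and made-up urgency labels; the published DeepMind/Moorfields framework is vastly more sophisticated and was trained on thousands of expertly labelled scans.

```python
# Toy sketch of OCT referral triage (illustration only; not the published
# DeepMind/Moorfields architecture). An untrained network maps a single
# greyscale OCT slice to probabilities over made-up referral-urgency labels.

import torch
import torch.nn as nn

REFERRAL_CLASSES = ["urgent", "semi-urgent", "routine", "observation only"]

class TinyOCTTriageNet(nn.Module):
    def __init__(self, num_classes: int = len(REFERRAL_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse to one value per channel
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# A single fake B-scan: batch of 1, 1 channel, 128x128 pixels of random noise.
scan = torch.randn(1, 1, 128, 128)
model = TinyOCTTriageNet()
probabilities = torch.softmax(model(scan), dim=1).squeeze(0)
decision = REFERRAL_CLASSES[int(probabilities.argmax())]
print({c: round(float(p), 3) for c, p in zip(REFERRAL_CLASSES, probabilities)}, decision)
```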

Real-world application

This is just the first stage of research, although Dr Keane is confident that a final product is not too far away. DeepMind and Moorfields now need to run clinical trials of their OCT system so that doctors have the chance to test it. Mustafa Suleyman, DeepMind co-founder, hopes that:

“when this is ready for deployment, which will be several years away, it will end up impacting 300,000 patients per year”

The team hopes that regulators approve a final product based on the immediate tangible benefits of a reduction in time and manpower needed to manually inspect scans, make diagnoses and refer for treatment.

Practical Benefits in the Developing World

PeekVision is a smartphone suite and includes an adapter called Peek Retina which allows the retina to be viewed with a smartphone. Credit: PeekVision.org

AI-powered screening can have an enormous impact in hard-to-reach areas. The ubiquity of smartphones around the world makes adding a portable camera and creating an image acquisition system simple and inexpensive. Already, companies such as UK-based Peek Vision have introduced camera adapters which allow high-quality images to be obtained easily and then analysed remotely.

Companies such as California-based Compact Imaging are currently working to make small form-factor multiple reference OCT (MR-OCT) available for smartphones and wearable technologies. The combination of these compact devices and AI-powered screening software could bridge geographical and economic chasms for many of the 285 million people worldwide living with some form of sight loss.

Sight loss around the world. Credit: DeepMind Health/Moorfields Eye Hospital

Artificial Intelligence elsewhere in health

These developments can act as a blueprint for the development of artificial intelligence elsewhere. DeepMind is currently doing research with University College London to assess whether AI can tell the difference between cancer and healthy tissue in CT and MRI scans. It is also working with Imperial College London to assess whether AI can interpret mammograms and improve accuracy in breast cancer screening.

In all these cases, the most practical benefit of using AI to screen for disease is one of resources — doctors would be freed to spend more time with individual patients and more time working on and providing treatments.

Pitfalls in AI’s Future

“I did everything, everything you ever asked! I created the perfect system” Clu in Tron Legacy (2010). Credit: Disney

Back to the realms of fiction, where accounts of AI are often littered with depictions of ever-evolving intelligences which strive to be perfect, such as Marvel’s Ultron or Tron’s Clu. These AIs struggle to balance a “human” rationalisation of ethics with the necessity to achieve their goal, with Earth-threatening consequences.

Though we are far from apocalypse scenarios, DeepMind has already been embroiled in controversy: it emerged that 1.6 million patient records had not been adequately safeguarded when shared between London’s Royal Free Hospital and DeepMind. Data-sharing agreements between the two had to be rewritten, and DeepMind has since created an “Ethics & Society” group to maintain the ethical standards of AI and ensure that social good is prioritised during the fast-moving evolution of these technologies.

Clearly, there may be obstacles ahead that no one can predict. DeepMind’s co-founder Mustafa Suleyman highlights the extent of the challenge:

“It won’t be easy: the technology sector often falls into reductionist ways of thinking, replacing complex value judgments with a focus on simple metrics that can be tracked and optimised over time….

Getting these things right is not purely a matter of having good intentions. We need to do the hard, practical and messy work of finding out what ethical AI really means.”

A future with everything to play for

Nevertheless, Suleyman describes a future which, with the right guidance, could be aided immensely by artificial intelligence when aligned with human values:

“If we manage to get AI to work for people and the planet, then the effects could be transformational. Right now, there’s everything to play for.”

A future, potentially, riding on AI. Credit: Luca D’Urbino

Seeing Things

An Art Exhibition inspired by the hallucinations of Charles Bonnet Syndrome

Visual Hallucinations

It was a pleasure to attend the first day of the Seeing Things interactive art exhibition, which is taking place at the lovely Forum in Norwich over the next two weeks.

Charles Bonnet Syndrome is a condition in which people experience visual hallucinations after sight loss. In contrast with other types of hallucination, those who experience these know that they are simply a creation of the brain reacting to the loss of vision.

Fascinating paintings depicting some of these hallucinations. Source: Own photo

The Art of Charles Bonnet

This art exhibition, set up by the NNAB, features art from people who suffer from the syndrome, as well as from other visual artists who have been inspired by speaking to those who experience these vivid hallucinations, which have their own unique attributes compared with other types of hallucination.

To the left – a bear statue in front of some upside-down cupcakes. Strange faces can also be a feature of the condition. Source: Own photo

Experiences

Dominic Ffytche, a world expert in the condition, gave a fantastic lecture about the Syndrome, and it was fascinating to listen to the experiences of sufferers. The audience comprised people who suffered from the condition, people who had not previously heard of the syndrome, and clinicians, such as myself, who are aware of the condition but want to understand more and gain perspective.

I was particularly intrigued by the number of people who experience hallucinations of period clothing from different eras, which seems to be a consistent feature of the syndrome. Interestingly, even when the syndrome was first described 250 years ago, the literature records sufferers talking about people wearing the period dress of the time. Perhaps 18th-century formal wear has a hallucinatory quality to it?

 Dr Dominic Ffytche, an expert in the condition, shows images of certain visual hallucinations that people experience. Source: own photo

Gaps

Even though Charles Bonnet Syndrome was first described 250 years ago, by a Swiss philosopher writing about his grandfather’s experiences after losing his sight to cataracts, we still do not know exactly why it happens. We suspect that the brain fills in the gaps left by visual loss, producing new fantastical pictures or replaying old images which it might have stored. For many people these hallucinations are not a problem, but for some they can, understandably, be distressing. It helps to understand these hallucinations, and it is useful for sufferers, the public and clinicians (such as yours truly) alike to be aware of and understand this fascinating condition.

 An interesting hallucination — bear and inverse cupcakes. Source: own photo

To this end, it is fantastic to have an art exhibition which both raises awareness and bewitches us, humbling us as clinicians into realising there is still so much about the eyes and the brain that we don’t yet understand. Do you have any experience of this condition? Please feel free to comment below.

Links:
Royal National Institute of Blind People
Norfolk and Norwich Association for the Blind
NHS Charles Bonnet Syndrome Information


The Eye in Mixed Reality: Using Microsoft’s Hololens to learn about the eye!

I think Mixed Reality (a hybrid of the real world and virtual reality which creates new environments and visualisations in which real and digital objects co-exist and interact in real time) is particularly useful for a subject like Ophthalmology, in which the spatial context of structures can be difficult to grasp. It has only just come out, but watch this space for new ways to learn anatomy and surgical skills!


The Physiology of the Eye: A fun way for students and patients to learn about the eye!

As an ophthalmology doctor, it is great to see these kinds of applications, which can help medical and optometry students, as well as patients and the public, understand how eye structures function and interact with each other. I enjoyed going through the application, playing with the well-constructed models and doing the challenging quizzes at the end of each section. Recommended for anyone who wants to learn more about The Eye.

 
