When I was a PhD student, around 20 years ago, some folks in my lab were working on visualisation of CT scan data. A CT scan produces a load of cross-sectional images, and the traditional way of looking at them is to scan through them one slice at a time. That needs a lot of training, because it's not how the human visual system evolved to see things.
They were using techniques from volumetric rendering (a CT scan is basically a volumetric data set) to improve on this. They had some demos at the time, built on real CT scan data, that could:
- Give you a 3D image that you could rotate and zoom.
- Use isosurfacing to pick out contiguous regions of the same kind of tissue, so you could remove skin, bone, and so on from the image and see just the organ you were interested in.
- Use similar techniques to apply false colour to highlight things (e.g. showing blood in a different colour to blood vessels), including translucency, so you could see through one kind of tissue to what lay behind it. (A rough sketch of both techniques follows this list.)
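To make that concrete, here's a minimal sketch of both ideas using VTK's Python bindings. VTK, the directory name, and the Hounsfield-unit thresholds are my illustrative choices, not anything from the lab's original code: one pass extracts a bone isosurface with marching cubes, the other volume-renders the data with a colour/opacity transfer function so soft tissue is translucent and bone is opaque.

```python
import vtk

# Load a stack of cross-sectional slices into one volumetric data set.
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("ct_slices/")   # hypothetical directory of DICOM slices
reader.Update()

# --- Isosurfacing: extract the surface where density crosses a threshold ---
# ~300 HU is an illustrative cut-off between soft tissue and bone.
bone_surface = vtk.vtkMarchingCubes()
bone_surface.SetInputConnection(reader.GetOutputPort())
bone_surface.SetValue(0, 300)

surface_mapper = vtk.vtkPolyDataMapper()
surface_mapper.SetInputConnection(bone_surface.GetOutputPort())
surface_mapper.ScalarVisibilityOff()
bone_actor = vtk.vtkActor()
bone_actor.SetMapper(surface_mapper)

# --- False colour and translucency: a transfer function over the volume ---
colour = vtk.vtkColorTransferFunction()
colour.AddRGBPoint(-1000, 0.0, 0.0, 0.0)   # air: black
colour.AddRGBPoint(40, 0.8, 0.4, 0.3)      # soft tissue: reddish
colour.AddRGBPoint(400, 0.9, 0.9, 0.9)     # bone: near white

opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(-1000, 0.0)   # air: invisible
opacity.AddPoint(40, 0.05)     # soft tissue: mostly translucent
opacity.AddPoint(400, 0.9)     # bone: nearly opaque

volume_property = vtk.vtkVolumeProperty()
volume_property.SetColor(colour)
volume_property.SetScalarOpacity(opacity)
volume_property.ShadeOn()

volume_mapper = vtk.vtkGPUVolumeRayCastMapper()
volume_mapper.SetInputConnection(reader.GetOutputPort())
volume = vtk.vtkVolume()
volume.SetMapper(volume_mapper)
volume.SetProperty(volume_property)

# --- Standard render window: rotate and zoom with the mouse ---
renderer = vtk.vtkRenderer()
renderer.AddActor(bone_actor)
renderer.AddVolume(volume)
render_window = vtk.vtkRenderWindow()
render_window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(render_window)
render_window.Render()
interactor.Start()
```

In practice you'd show one representation at a time and let the user edit the thresholds and transfer function interactively, which is exactly the "peel away skin and bone" experience the demos gave.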
At the time, this needed a fairly beefy desktop GPU. Today, the exact same code would run on an iPad without warming it up too much.
So I was incredibly disappointed when I saw a specialist looking at a CT scan in hospital a few weeks ago and they were still doing the scan-through-slices visualisation.
When someone talks about how 'AI will revolutionise health care', remember that there are old, well-understood bits of IT that still aren't deployed in the health profession, even after clinicians have said they would definitely make their lives easier. Even getting records digitised so hospitals have instant access to patients' medical histories still isn't finished, and that's based on 1960s technology.