It depends on the indication for the scan. Some indications do not require contrast, others MUST have contrast in order to have any value. If you refuse contrast without understanding the reason, you may be simply wasting your time and money.
I think you must have misunderstood where the artifact was coming from. Gadolinium retention has been shown to occur, but it has not been reliably linked to any clinical symptoms. Gadolinium tissue retention also does not interfere with interpretation.
I agree with this sentiment. I have always wished, maybe naively, for the type of computing environment that makes possible the things you see in sci-fi movies and shows, where someone can simply "route all power to the forward lasers!" or "use the power cells from your rifle to keep life support systems online!" This imaginary world where technological components are trivially interchangeable, compatible, reusable. My impression is that if you asked a smartphone hardware engineer to replace a broken iPhone camera with a leftover working camera from an Android phone, at best it would be an extraordinarily difficult task, and at worst it may simply not be possible.
The issue is that in medicine, much like automobiles, unexpected failure modes may be catastrophic to individual people. "Fixing" failure modes like the one in the comment above is not difficult from a technical standpoint, that's true, but you can only fix a failure mode once you've identified it, and by that point you may have a dead person or people. That's why AI in medicine and self-driving cars are so unlike AI for programming or writing, and move comparatively at a snail's pace.
Yet self-driving cars are already competitive with human drivers, safety-wise, given responsible engineering and deployment practices.
Like medicine, self-driving is more of a seemingly-unsolvable political problem than a seemingly-unsolvable technical one. It's not entirely clear how we'll get there from here, but it will be solved. Would you put money on humans still driving themselves around 25-50 years from now? I wouldn't.
These stories about AI failures are similar to calling for banning radiation therapy machines because of the Therac-25. We can point and laugh at things like the labeling screwup that pjdesno mentioned -- and we should! -- but such cases are not a sound basis for policymaking.
> Yet self-driving cars are already competitive with human drivers, safety-wise, given responsible engineering and deployment practices.
Are they? Self-driving cars only operate in a much safer subset of the conditions that humans do. They have remote operators who take over when a situation arises outside the normal operating parameters, or they simply pull over and stop.
I've never been in a self-driving car myself, but your position verges on moon-landing denial. They most certainly do exist, and have for a while.
Yes, they still need human backup on occasion, usually to deal with illegal situations caused by other humans. That's definitely the hard part, since it can't be handwaved away as a "simple" technical problem.
AI in radiology faces no such challenges, other than legal and ethical access to training data and clinical trials. Which admittedly can't be handwaved away either.
Poorer performance in real hospital settings has more to do with the introduction of new/unexpected/poor-quality data (i.e. real-world data) that the model was not trained on or optimized for. Models still do very well generally, but often do not hit the performance reported in FDA submissions or marketing materials. This does not mean they aren't useful.
Clinical AI also has to balance accuracy with workflow efficiency. It may be technically most accurate for a model to report every potential abnormality with associated level of certainty, but this may inundate the radiologist with spurious findings that must be reviewed and rejected, slowing her down without adding clinical value. More data is not always better.
In order for the model to have high enough certainty to strike the right balance of sensitivity and specificity to be useful, many, many examples are needed for training, and with some rarer entities, that is difficult. It also inherently reduces the value of the model if it is only expected to identify its target disease three times a year.
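To make the sensitivity/specificity tradeoff concrete, here is a toy sketch (all scores and labels are made up for illustration): lowering the operating threshold catches more true disease but flags more normals, which is exactly the "inundated with spurious findings" problem above.

```typescript
// Toy illustration of the operating-point tradeoff.
// sensitivity = fraction of diseased cases flagged (score >= threshold)
// specificity = fraction of normal cases correctly left unflagged

interface Case { score: number; diseased: boolean }

function sensitivity(cases: Case[], threshold: number): number {
  const pos = cases.filter(c => c.diseased);
  return pos.filter(c => c.score >= threshold).length / pos.length;
}

function specificity(cases: Case[], threshold: number): number {
  const neg = cases.filter(c => !c.diseased);
  return neg.filter(c => c.score < threshold).length / neg.length;
}

const cohort: Case[] = [
  { score: 0.9, diseased: true },
  { score: 0.6, diseased: true },
  { score: 0.7, diseased: false },
  { score: 0.2, diseased: false },
];

// At a strict threshold, no false alarms but half the disease is missed;
// at a lax threshold, all disease is caught but false positives appear.
const strict = [sensitivity(cohort, 0.8), specificity(cohort, 0.8)];
const lax = [sensitivity(cohort, 0.5), specificity(cohort, 0.5)];
```

In a clinic, the cost of each false positive is a radiologist's time spent reviewing and rejecting it, which is why "report everything with a certainty score" isn't automatically the right design.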
That’s not to say advances in AI won’t overcome these problems, just that they haven’t, yet.
While a lot of this rings true, I think the analysis is skewed towards academic radiology. In private practice, everything is optimized for throughput, so the idea that most rads spend less than half of their time reading studies is, I think, probably way off.
As a radiologist and full stack engineer, I’m not particularly worried about the profession going away. Changing, yes, but not more so than other medical or non-medical careers.
litevna.app - a DICOMweb-compatible medical imaging archive, built on Cloudflare Workers and optimized for fast image delivery globally. Images are all encoded as HTJ2K for progressive image loading, and the popular OHIF zero-footprint DICOM viewer is built in.
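For readers unfamiliar with DICOMweb: WADO-RS (the retrieval half of the standard) addresses objects by a hierarchy of study/series/instance UIDs in the URL path. A sketch of the routing a worker would do (this is illustrative, not litevna.app's actual code):

```typescript
// WADO-RS addresses objects hierarchically:
//   /studies/{studyUID}
//   /studies/{studyUID}/series/{seriesUID}
//   /studies/{studyUID}/series/{seriesUID}/instances/{instanceUID}
// A worker can parse the path once and use the UIDs as cache/storage keys.

interface WadoTarget {
  studyUID: string;
  seriesUID?: string;
  instanceUID?: string;
}

function parseWadoPath(path: string): WadoTarget | null {
  // DICOM UIDs are dotted strings of digits, e.g. "1.2.840.10008.1.1".
  const m = path.match(
    /^\/studies\/([\d.]+)(?:\/series\/([\d.]+)(?:\/instances\/([\d.]+))?)?$/
  );
  if (!m) return null;
  return { studyUID: m[1], seriesUID: m[2], instanceUID: m[3] };
}
```

The nice property for an edge deployment is that these UID triples are stable, globally unique identifiers, so they make natural keys for cached HTJ2K frames.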
Building it mainly to power the next generation of pacsbin.com, but I may offer it as a standalone service as well.
I use JWTs to let me do auth on cached resources. I can verify permissions in an edge worker and deliver the cached resource without needing to roundtrip to the database. Not sure how to implement that without JWT (or rolling my own solution). Lots of people here saying some version of “I don’t see the use case, just use X”, but these kinds of standards nearly always arise as a result of a valid use case, even if they aren’t as common.
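The pattern is that the token itself carries the claims, so the edge worker can authorize a cached resource with nothing but the signing secret. A minimal HS256 sketch of that check (using Node's crypto for illustration; on Workers you'd use WebCrypto, and in production a vetted JWT library):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Minimal HS256 JWT sign/verify, sketching auth-at-the-edge with no
// database roundtrip. Illustrative only; not hardened production code.

function b64url(buf: Buffer): string {
  return buf.toString("base64url");
}

function signJwt(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

// Returns the payload if signature and expiry check out, else null.
// The caller can then inspect claims (e.g. a scope) before serving
// the cached resource.
function verifyJwt(token: string, secret: string): Record<string, unknown> | null {
  const [header, body, sig] = token.split(".");
  if (!header || !body || !sig) return null;
  const expected = createHmac("sha256", secret).update(`${header}.${body}`).digest();
  const given = Buffer.from(sig, "base64url");
  // Constant-time comparison to avoid leaking signature bytes.
  if (given.length !== expected.length || !timingSafeEqual(given, expected)) return null;
  const payload = JSON.parse(Buffer.from(body, "base64url").toString());
  if (typeof payload.exp === "number" && payload.exp < Date.now() / 1000) return null;
  return payload;
}
```

Because verification needs only the secret (or, with RS256/ES256, just the public key), every edge location can make the allow/deny decision locally, which is exactly what makes this work for cached resources.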