Algorithmic surveillance to prevent serious failures in oncology

ESTHER PANIAGUA

Lorena Jaume-Palasí is co-founder of the NGO AlgorithmWatch, a non-profit devoted to revealing problematic Automated Decision Making (ADM) processes, algorithmic biases and software failures that might have “enormous” consequences for cancer diagnosis, prognosis and treatment.

Lorena Jaume-Palasí (Photo by Heinrich-Böll-Stiftung, Creative Commons licence 2.0)

One year ago, ProPublica revealed that a technique used by New York City’s crime lab to analyse especially difficult DNA samples was being questioned in court by scientists. Since the method had been used in thousands of criminal cases, the impact could be enormous. Why did it happen? Could the same happen with the genetic tests used in oncology? “Of course it is possible”, answers Lorena Jaume-Palasí, whose Berlin-based NGO AlgorithmWatch campaigns to change procedures so as to guarantee that this kind of software is double-checked, especially when it is used for medical purposes.

Not so personalized

“This type of software mistake has a huge impact. If there are glitches, patients will be inaccurately identified as suitable for a specific therapy, and the margin of error will become bigger with every step within the algorithmic system, so the allegedly ‘personalized medicine’ will be anything but personalized”, Jaume-Palasí explains. The origin of the problem, she continues, is that the mathematical formula behind a piece of software is often not properly applied, meaning that the software does not translate the formula with 100% accuracy. In diagnostic systems, the consequence is false positives and false negatives. “The algorithm is a mathematical formula, and mathematics is not part of the curricula of people studying informatics in Western countries. They are creating software that has algorithms, or translating algorithms into code, without really understanding what the mathematical formula means”, she says.
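
A toy illustration of the gap she describes between a formula and its translation into code: the two variance computations below are mathematically identical, yet a naive floating-point translation of the first can return a wildly wrong answer. This is a generic numerical-analysis example, not code from any of the diagnostic systems discussed here.

```python
# Two mathematically equivalent variance formulas can diverge in floating point.
# Hypothetical illustration; not code from any diagnostic system discussed above.

def variance_one_pass(xs):
    """Naive translation of Var(x) = E[x^2] - (E[x])^2."""
    n = len(xs)
    mean = sum(xs) / n
    mean_sq = sum(x * x for x in xs) / n
    return mean_sq - mean * mean  # catastrophic cancellation for large offsets

def variance_two_pass(xs):
    """Numerically stable form: Var(x) = E[(x - mean)^2]."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

# Measurements riding on a large constant offset, e.g. raw sensor counts.
data = [1e9 + 0.1, 1e9 + 0.2, 1e9 + 0.3]

print(variance_one_pass(data))  # garbage (often 0.0 or even negative)
print(variance_two_pass(data))  # ~0.00667, the correct value
```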

AlgorithmWatch asserts that these processes require people who are able to understand both code and mathematics, and who can check whether what the software produces corresponds to the mathematical formula. They also require genetics specialists with a knowledge of statistics, able to make sense of the output that is being given: in the NYC scandal, it was a biologist who first rang the alarm bell.

Jaume-Palasí has also realised, by looking at very simple automation systems in the health sector, that there are very specific parts of the process where neither engineers nor health professionals have thought through the use of the technology. “Conversations happening now are starting to be slightly more holistic, trying to understand and evaluate the first decisions made about human-machine interaction in the 80s and 90s, when the first echocardiography machines and other imaging technology were introduced into health systems. Those decisions were very one-sided and a rethinking is needed”, she says.

Approximation of truth

AlgorithmWatch also points out that, when physicians use these screening tools, the results are taken for granted rather than interpreted. “Those machines don’t simply show an image: they give an approximation of specific parts of the body and show whether there are specific problems or diseases. But all these machines are pretty sensitive: they err on the side of caution, so on many occasions doctors need to interpret the results”, Jaume-Palasí says. “Many of the results are false positives, but there are specific results where the human, manual calculation is very difficult. So in many cases this approximation is not evaluated by the physician and inaccurate results are taken for granted”, she continues. This might lead, she adds, to overlooking specific diseases or missing changes of medication that may be needed.
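
Her point about false positives follows directly from Bayes’ theorem: even a very sensitive test produces mostly false alarms when the condition is rare in the screened population. The figures below are illustrative assumptions, not the characteristics of any particular tool.

```python
# Why a sensitive screening tool yields mostly false positives at low
# prevalence. Illustrative numbers only, not figures from any real tool.

sensitivity = 0.99   # P(test positive | disease)
specificity = 0.95   # P(test negative | no disease)
prevalence  = 0.01   # P(disease) in the screened population

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_pos  # P(disease | positive), Bayes' theorem

print(f"P(test positive)      = {p_pos:.3f}")
print(f"P(disease | positive) = {ppv:.3f}")  # ~0.167: about 5 in 6 positives are false
```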

Jaume-Palasí adds another concern about taking results for granted: the so-called “uncertainty bias”. If one population type constitutes only 30% of a dataset and another type makes up the remaining 70%, approximations for that 30% are going to be less accurate. Technologies like Predict or Adjuvant (oncology tools that calculate survival estimates, treatment benefits for hormone therapy and chemotherapy, and mortality reductions in cancer) are very good for a specific set of the population, but studies show that their accuracy is reduced for patients under 40 years of age.
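
A minimal sketch of this uncertainty bias, on synthetic data: a single model trained on a 70/30 mix, where the smaller group follows a different feature-outcome relationship, ends up systematically less accurate on the minority group. The data and model here are assumptions for illustration only.

```python
# "Uncertainty bias" sketch: one pooled model fits the 70% majority well and
# the 30% minority poorly. Synthetic data; purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_major, n_minor = 7000, 3000  # 70% / 30% split in the training data

X_major = rng.normal(size=(n_major, 2))
y_major = (X_major[:, 0] + X_major[:, 1] > 0).astype(int)  # majority pattern

X_minor = rng.normal(size=(n_minor, 2))
y_minor = (X_minor[:, 0] - X_minor[:, 1] > 0).astype(int)  # different pattern

X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])

model = LogisticRegression().fit(X, y)  # one model for everyone

print("majority accuracy:", accuracy_score(y_major, model.predict(X_major)))
print("minority accuracy:", accuracy_score(y_minor, model.predict(X_minor)))
```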

AI for classification

The home page of AlgorithmWatch

But it’s not all bad news. According to Jaume-Palasí, cutting-edge intelligent systems are starting to show their utility not for predictions but for classification, and for the identification of patients with undetected rare cancers. “Specifically in oncology, this technology is leading physicians to better diagnoses, because what it can do well is classify and find details that an expert eye is not able to identify”, she says.

“We need to be realistic about what those technological tools can do, though. They do not understand what ‘cancer’ or ‘tumour’ means. They are being used in hospitals for some sort of automation of diagnoses – which could be wrong – instead of training physicians to understand how to use them for a better diagnostic process, which we think is the proper way”, the co-founder of AlgorithmWatch says [as reported also by Fabio Turone in The trouble with health statistics, in this issue of Cancer World].

To conclude, Jaume-Palasí remarks on the need to upgrade medical software. Radiology machines run very simple software, but their performance could be much better, she says, if they incorporated machine-learning algorithms able to learn from the physician’s feedback when the results are analysed. “The problem is that those improvements are not always profitable for the health-technology companies, so they don’t have incentives to implement them”, she asserts. That is the dictatorship of the market.
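
What such a feedback loop could look like, in the simplest possible terms: a model that updates incrementally each time a physician confirms or corrects a machine reading. The feature extraction and the clinician interface are assumed, not shown, and this is a generic sketch rather than any vendor’s actual system.

```python
# Sketch of a learn-from-feedback loop: the model is updated with every
# expert-reviewed case. Generic illustration, not any vendor's real system.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # 0 = no finding, 1 = suspicious finding

def on_physician_feedback(features, physician_label):
    """Incrementally update the model with one expert-reviewed case."""
    model.partial_fit(features.reshape(1, -1), [physician_label], classes=classes)

# Simulated stream of reviewed cases (random vectors stand in for image features).
rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.normal(size=8)    # hypothetical image-derived features
    label = int(x[0] > 0)     # stand-in for the physician's verdict
    on_physician_feedback(x, label)

print(model.predict(rng.normal(size=(1, 8))))  # model now reflects the feedback
```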
