FDA approves AI test for prostate screening. No tuchus exam.

  • snitzoid
  • 5 days ago
  • 2 min read

FDA just signed off on AI predicting long-term cancer outcomes

The ArteraAI prostate test uses artificial intelligence and biomarkers to predict if standard hormone therapy will be effective and for how long.

By Jennifer Ortakales Dawkins, Quartz Media

Published 19 hours ago


The Food and Drug Administration has approved artificial intelligence as a way to predict some cancer outcomes — resulting in yet another win for AI in the medical space.


The agency issued a de novo authorization to the ArteraAI Prostate test, an AI test used to personalize cancer therapy and predict treatment outcomes. The software reportedly detects cancer with 84.7% accuracy, while physicians attempting to detect cancer manually scored between 67.2% and 75.9%.


The new classification makes the test the first AI software to predict long-term cancer outcomes authorized by the FDA.


Artera is one of several technology companies vying to use artificial intelligence to improve accuracy in diagnosing cancer and determining treatment. Companies like Freenome, CureMetrix, and PathAI focus on diagnosis and early detection, while companies like Immunai and Anima Biotech focus on treatment.


Tech companies outside of the medical space are also trying to get in on the trend. Apple has developed features for its watches to detect heart arrhythmias, and Nvidia is working with Medtronic and Johnson & Johnson to build out their AI devices and software.


The FDA has approved a growing number of AI medical devices in the last decade: 950 devices with AI features were authorized between 1995 and 2024, MedTech Dive reported. The boom has been driven largely by more investment in AI and machine learning, more connected devices, and a growing familiarity with how software is regulated as a medical device, according to the publication.


However, while AI can reduce medical errors in some capacities, it can also lack accuracy in other areas, raising concerns about medical responsibility and data reporting.


An October report in the journal npj Digital Medicine found that machine learning and artificial intelligence models in healthcare may exacerbate health biases. Researchers reviewed 692 FDA-approved AI medical devices, examining transparency, safety reporting, and sociodemographic representation. They found that FDA reporting data was inconsistent and heightened the risk of algorithmic bias and health disparity.


Companies may also face regulation on the state level. Illinois became the latest to restrict the use of artificial intelligence in therapy, following Nevada and Utah, as at least three other states consider their own restrictions on the technology. Illinois is prohibiting the use of AI to “provide mental health and therapeutic decision-making,” according to a state press release. However, licensed behavioral health professionals can still use the tech for administrative and supplementary support services.

©2021 by The Spritzler Report. Proudly created with Wix.com