Early Recalls Raise Concerns About AI-Enabled Medical Devices
Artificial intelligence–enabled medical devices (AIMDs) are reshaping U.S. healthcare, promising faster diagnoses, improved decision-making, and expanded patient access to advanced technologies. But rapid innovation brings real risk. A new study published in JAMA Health Forum (Aug. 22, 2025) sheds light on the safety challenges these devices pose, particularly the role of limited clinical validation and manufacturer incentives in fueling early recalls.
At KBD Attorneys, our team is closely monitoring these developments. As advocates for patients injured by unsafe or defective devices, we believe studies like this are critical for understanding where regulation and oversight must evolve. We’ve written previously about the risks tied to medical devices in general. This latest research underscores why vigilance matters.
The Study: Linking Validation Gaps to Early Recalls
The cross-sectional study analyzed 950 FDA-cleared AIMDs and matched them to recall events, with data collected between November 15 and November 30, 2024. Here’s what the researchers found:
- 6.3% of devices (60 total) were recalled, accounting for 182 recall events.
- Many recalls were significant in scale; one category alone involved more than 935,000 affected units.
- The most common causes were:
  - Diagnostic or measurement errors (109 recalls)
  - Functionality delays or failures (44 recalls)
  - Physical hazards such as device breakage (14 recalls)
  - Biochemical hazards (13 recalls)
What stood out most: 43% of all recalls occurred within the first year after FDA clearance—nearly double the early recall rate for all medical devices cleared through the FDA’s 510(k) process.
This is no small issue. Early recalls shake clinician and patient confidence in AI tools and suggest that some products are reaching the market long before they’ve been adequately tested in real-world conditions.
Why Validation Matters
A core finding of the study is the strong relationship between lack of clinical validation and higher recall risk:
- Devices without validation averaged 3.4 recalls per device, compared with fewer than 2 for devices with retrospective or prospective validation.
- Unvalidated devices were also tied to larger recalls, with tens of thousands more units pulled from the market than their validated counterparts.
This could reflect a critical weakness in the FDA’s 510(k) clearance pathway, which allows devices to be approved based on “substantial equivalence” to previously cleared products, often without prospective human testing.
As the authors note, requiring prospective evaluation—or issuing time-limited clearances that expire without follow-up data—could help mitigate early safety failures.
Public vs. Private Manufacturers: An Added Risk Factor
The study also highlighted how the type of manufacturer influences recall risk:
- Public companies accounted for over 90% of recalls and nearly all recalled units.
- Smaller public companies were especially likely to release devices lacking clinical validation.
- Private companies had lower recall rates and were more likely to conduct validation testing before launch.
Why the disparity? The researchers suggest that investor pressure on public companies may drive faster, less cautious rollouts. With market competition and quarterly reporting cycles at play, some manufacturers may cut corners on testing in order to launch sooner.
This raises an important policy question: Should regulatory standards for AI-driven devices account for corporate structure and incentives?
Implications for Patients and Providers
For clinicians, the study’s findings underscore the importance of scrutinizing validation data before integrating AI devices into practice. For patients, they highlight the risks of assuming FDA clearance equals rigorous testing. In the case of AIMDs, clearance often means the opposite: limited premarket evaluation and a reliance on postmarket performance data.
Perhaps most concerning: At the close of the study, 59% of recalls remained unresolved, and some had been open for more than three years. This suggests not only that devices are entering the market too quickly, but also that remediation is slow—leaving patients exposed to ongoing risks.
Why This Matters for Our Practice
At KBD, we’ve long represented individuals harmed by defective medical devices. AI-driven products bring new complexities:
- Injuries may stem from invisible algorithmic errors rather than mechanical failures.
- Responsibility may be obscured when multiple entities contribute to device design, training data, and deployment.
- Patients may not even know they were harmed by an AI-enabled device until long after use.
The legal system is still adapting to these challenges, but our firm is at the forefront of analyzing how liability principles apply to this new generation of devices. We have been investigating AI platforms and devices, and we will continue to track how they affect patients.
Moving Forward: The Call for Stronger Oversight
The study authors recommend several policy reforms that align with our advocacy work:
- Require prospective clinical testing for AI devices before market entry.
- Implement time-limited clearances that expire in the absence of confirmatory performance data.
- Enhance postmarket surveillance, similar to the risk-based pharmacovigilance models used for drugs.
These steps could help ensure that patients reap the benefits of AI innovations without becoming test subjects for under-validated technology.
Conclusion
Artificial intelligence is transforming medicine, but this JAMA Health Forum study is a sobering reminder that innovation must not outpace safety. Too many AI-enabled devices are entering the market without adequate testing, and recalls—especially early recalls—pose real risks to patients.
At KBD we remain committed to tracking these developments, advocating for stronger oversight, and representing individuals harmed by unsafe devices. As AI becomes more deeply embedded in healthcare, vigilance from regulators, providers, and legal advocates alike will be essential to protecting patients.
Contact us if an AI device has harmed you.


