
AI/ML Medical Devices, Regulation and Sunrise in the West



Policy makers have often been caught off guard by new-age technology. Technology emerges and evolves rapidly, and regulations are slow to catch up. This cat-and-mouse game continues with another fast-emerging and disruptive technology – AI/ML-based medical devices.

The US and EU are seeing waves of regulatory and policy-level interest in AI/ML medical devices.

The timing for these interventions also seems right. While many devices have been launched, the technology is still nascent, and AI/ML medical devices have not yet found a permanent place in hospital workflows. The US FDA has approved around 340 AI/ML-based medical devices, and the market for these devices is only going to expand over time. It looks like the US is interested in ensuring early regulation of this ‘sunrise’ sector, and calls are being made for increased transparency and clarity.

According to some studies, India still imports approximately 70% of its medical devices and manufactures relatively few locally. If this reliance on imported medical devices, especially AI/ML medical devices, were to continue, there are several concerns that regulators must address, and address soon. While unregulated imports are a concern – India may well become a dumping ground or a testing ground for these nascent machines – regulation of home-grown devices is equally important.

Prior to 2018, software-based medical devices were not regulated in India (see e.g. here). For instance, AI solutions for diabetic retinopathy screening, software tools for 3D visualisation and quantitative analysis of liver anatomy, software to assess the baseline and post-concussion state of an individual, etc. were not treated as medical devices and hence were not subject to regulation.

This changed in 2021, when the CDSCO notified 60 ‘medical devices pertaining to software’ as medical devices subject to regulation under the Medical Device Rules, 2017. This change was triggered by amendments made in 2020, when the definition of ‘medical device’ was amended to explicitly include ‘software’ […] ‘which in its intended use assists in diagnosis, prevention, treatment or alleviation of disease or disorder […]’ etc.

However, unlike the US, where the FDA has issued clarifications on certain ‘non-device software functions’, Indian regulations are silent on many aspects. For instance, the distinction between ‘medical’ software and software used to encourage healthy lifestyles is not clearly dealt with in the rules or notifications. Are apps that use AI to recommend healthy food choices, track sugar levels, etc. medical devices, or simply software solutions that encourage healthy living? Other grey areas include how best to regulate pure software solutions that are device-agnostic – e.g. AI/ML used to support diagnosis, where any device may send data to a solution provider’s AI for the AI to determine whether the patient suffers from a particular disease. These and many other regulatory concerns remain to be fleshed out.

Further, given that India relies on imports for medical devices, it is useful to look to the West to understand the challenges being faced in product regulation in those markets.

As explained by Sara Gerke here, of the 340 AI/ML devices cleared in the US, most have been cleared via the 510(k) premarket notification pathway as low-risk (class I) to moderate-risk (class II) devices that are substantially equivalent to predicate devices. As pointed out, this pathway doesn’t usually require submission of clinical evidence to show that these devices are safe and effective across patient populations. While the FDA requires clinical studies to be conducted for certain medical devices, such as those subject to premarket approval (usually high-risk class III devices), clinical studies are scant when it comes to AI/ML medical devices. Clinical trials and studies are the standard way to show safety and effectiveness, and their absence for AI/ML medical devices raises concerns about how safe and effective these devices really are. Other concerns include reliance on old (preamendment) predicate devices in the 510(k) pathway.

Added to this, issues such as algorithmic bias increase risk and reduce effectiveness. Bias arising from training datasets that are not representative enough, or from the context in which such systems were trained and used, could result in ‘imported’ devices being ineffective or unsafe in the Indian context. Even if training datasets are representative, Cohen and Gerke explain that ‘contextual bias’ in the healthcare system could result in improper recommendations (e.g. recommendations may vary between specialist hospitals and non-specialist or rural hospitals).

These issues become especially relevant since the Medical Device Rules, 2017 waive the requirement of undertaking clinical investigations for imported medical devices that do not have predicates in India. Independent clinical investigations are not required if the device has been approved by the FDA (or certain other regulatory authorities) and has been marketed for two years in the US (or certain other jurisdictions).

The proviso to Rule 63 contains this waiver. However, there is some ambiguity in its application: the proviso is limited to ‘investigational medical devices’, but commentary suggests that the waiver applies to medical devices in general. Readers who know how to interpret this waiver, please do share your thoughts with us.

Will such a waiver apply to AI/ML medical devices? Though these devices are approved by foreign regulatory authorities, there are questions around their safety and effectiveness. There is also a risk that these concerns may not be identified or resolved during the two-year marketing period, because the technology itself may not be ‘explainable’ or may be covered by trade secrets. And even where concerns are identified, the findings may apply to different contexts. AI/ML devices may perform differently in the Indian context – for example, if training datasets are not representative, if algorithmic decision-making is opaque (especially ‘black-box’ concerns), or if needs and contexts differ. One solution could be to require pre-marketing clinical investigations along with post-market studies instead of the waiver. Another suggestion could be to increase sharing of clinical trial data to enable better oversight and quicker issue-spotting.

Many other issues remain to be resolved in the AI/ML medical device space in India. For instance, the definition of medical device was amended in 2020 to include all software which in its intended use assists in diagnosis, prevention, treatment or alleviation of disease or disorder […] etc. However, in 2021, the CDSCO listed only 60 specific software-based medical devices in its risk classification, while the international market appears to have many more such devices (~340). Could this mean that some devices may not fall under the regulatory ambit at all? The cat-and-mouse game continues…

Thank you to Swaraj for comments!

