AI health software now mandates government licence and clinical trials
India has brought AI-based diagnostic software under medical device regulations, mandating government licences and clinical validation. The move subjects AI tools to safety standards similar to medical equipment and is expected to impact health-tech start-ups using AI in clinical care.
Published Date - 26 January 2026, 07:19 PM
Hyderabad: In a landmark development in the field of health-tech, Artificial Intelligence (AI) diagnostic software and tools have been formally brought under regulatory oversight in the country.
The regulatory authorities in India have now classified AI-based detection tools as medical devices, which means the software will be subjected to the same rigorous safety standards as medical hardware such as MRI and CT scanners and implants.
The directives to this effect were released by the Central Drugs Standard Control Organisation (CDSCO) this month. The notification stated that AI software used to detect or diagnose critical conditions such as cancer and heart blocks through X-rays, CT scans and MRIs has now been categorised as Class C medical devices, indicating moderate-to-high risk to patients.
Before this, almost all AI diagnostic tools implemented in corporate hospitals in Hyderabad and elsewhere operated in a grey zone, often deployed under the guise of research or wellness applications without standardised clinical validation.
Following the reclassification of AI software as medical devices, developers must now secure a manufacturing or import licence from the Central Licensing Authority before their software can be used on patients in clinical practice.
A clear regulatory framework ensures that AI is ethically deployed. The new rules mandate that AI tools be validated on Indian patients to ensure accuracy in real-world clinical use.
Senior health officials familiar with the use of AI tools in healthcare have said that the new regulatory framework is likely to pose challenges for health-tech start-ups.
Start-ups and health-tech companies involved in developing AI tools for medicine must now implement robust Quality Management Systems (QMS). They must also strictly adhere to post-market surveillance protocols, which include mandatory reporting of adverse events such as misdiagnosis caused by algorithm-related errors.