
Artificial Intelligence (AI) has rapidly emerged as one of the most influential forces transforming the insurance sector. Its potential spans underwriting, pricing, fraud detection, claims automation, customer service, sales forecasting, and risk analytics. Insurers worldwide are turning to AI to improve accuracy, speed, efficiency, and customer experience.
However, while AI promises a future of intelligent and automated insurance operations, its implementation is far more complex than it appears. The insurance sector, characterised by strict regulation, sensitive customer data, legacy systems, and high liability, faces unique barriers that make AI adoption challenging, risky, and expensive.
Regulatory Uncertainty and Evolving AI Governance
India’s regulatory approach to AI is still taking shape. In January 2025, the Ministry of Electronics and Information Technology’s sub-committee published its report on “AI Governance and Guidelines Development,” attempting to identify key issues and propose recommendations for developing an AI regulatory framework.
IRDAI introduced the Information and Cyber Security Guidelines in 2023, which require regulated entities to adopt a risk-based approach and implement measures to secure data management. Additionally, IRDAI introduced the Regulatory Sandbox Regulations in 2025 to foster innovation while ensuring structured growth.
The regulatory framework also creates friction with the Digital Personal Data Protection Act. The DPDP Act marks a significant shift in India’s data protection and privacy landscape, with far-reaching implications around consent requirements and data principal rights. Insurers face uncertainty around:
- Acceptable data sources for model training
- Cross-border data usage and storage
- Mandatory explainability norms
- AI audit frameworks and liability in automated decisions
- Cybersecurity standards for AI systems
This overlap between the DPDP Act and existing IRDAI regulations adds complexity, with regulated entities potentially attempting to rely on sector-specific regulations to avoid full compliance.
Unstructured, Incomplete, and Fragmented Data
AI thrives on clean, structured, and well-labelled data. Insurance data, unfortunately, is often the opposite. The Star Health breach in August 2024, affecting over 31 million customers and involving 7.24 terabytes of sensitive data, highlighted how fragmented data management creates vulnerabilities.
Common data challenges include:
- Scanned proposal forms and handwritten notes from agents
- Hospital discharge summaries in non-standard formats
- PDF customer declarations and telephonic claim descriptions
- Vehicle damage images of inconsistent quality
- Claims files stored in multiple databases or branches
This results in poor-quality training datasets, inaccurate predictions, model drift and inconsistency, error-prone fraud detection models, and failure of automated underwriting engines. Without structured datasets and data governance frameworks, AI cannot deliver reliable outcomes.
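As a rough sketch of the kind of quality control such governance frameworks involve, the check below gates claims records before they reach a training set. The field names, rules, and thresholds are illustrative assumptions, not any insurer's actual schema:

```python
# Minimal sketch of a data-quality gate for claims records before model training.
# All field names and rules are illustrative assumptions, not an insurer's schema.

REQUIRED_FIELDS = {"claim_id", "policy_no", "claim_amount", "loss_date"}

def quality_issues(record: dict) -> list[str]:
    """Return a list of data-quality problems found in one claims record."""
    issues = []
    present = {k for k, v in record.items() if v not in (None, "")}
    missing = REQUIRED_FIELDS - present
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    amount = record.get("claim_amount")
    if isinstance(amount, (int, float)) and amount < 0:
        issues.append("negative claim amount")
    return issues

def filter_training_set(records: list[dict]) -> tuple[list, list]:
    """Split records into clean rows (usable for training) and rejected rows with reasons."""
    clean, rejected = [], []
    for r in records:
        problems = quality_issues(r)
        if problems:
            rejected.append((r, problems))
        else:
            clean.append(r)
    return clean, rejected
```

Rejected rows are kept with their reasons rather than silently dropped, so data-entry problems can be traced back to their source.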
Algorithmic Bias in AI Models
AI learns from historical data. If previous underwriting, claims, or pricing practices contained biases, AI will replicate and even reinforce them. While insurers are prohibited from using certain protected characteristics to discriminate directly against policyholders, models can still discriminate indirectly through proxy variables that correlate with those characteristics.
Examples of potential bias include:
- Charging higher premiums for customers in certain PIN codes
- Rejecting claims for first-time policyholders based on historical patterns
- Penalising customers with non-standard professions
- Prioritising urban customers over rural applicants
- Incorrect fraud flags based on socio-demographic markers
In a market as diverse as India, biased AI systems can unfairly disadvantage large segments of customers. Regulators may also intervene if AI appears discriminatory, as they have under the EU’s GDPR and US state-level AI fairness legislation.
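One common way to quantify this kind of indirect bias is a disparate-impact ratio: the lowest group approval rate divided by the highest. The sketch below computes it over a hypothetical proxy grouping (such as PIN-code region); the 0.8 threshold in the comment follows the US "four-fifths" convention and is purely illustrative, not an IRDAI requirement:

```python
# Sketch of a simple disparate-impact check on model decisions, grouped by a
# potential proxy variable (e.g. PIN-code region). A ratio below ~0.8 (the US
# "four-fifths rule" convention, used here only for illustration) would flag
# the model for a closer bias audit.

from collections import defaultdict

def approval_rates(decisions: list) -> dict:
    """decisions: (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions: list) -> float:
    """Lowest group approval rate divided by the highest; 1.0 means parity."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())
```

Run periodically over live decisions, a check like this turns "bias audit" from a slogan into a number that can be tracked and reported.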
Lack of Auditability and Explainability
Insurance decisions (pricing, risk selection, and claims) are heavily scrutinised. Regulators, ombudsmen, and customers often demand: Why was a claim rejected? Why was the premium increased? Why was a customer flagged as “high risk”?
NITI Aayog’s paper on Responsible AI emphasises that the design and functioning of AI systems should be recorded and made available for external scrutiny and audit, to ensure deployment is fair, honest, and impartial, and that accountability is guaranteed. Black-box AI models cannot easily explain their reasoning, making compliance difficult, especially under the DPDP Act and consumer protection laws.
Insurers must adopt Explainable AI (XAI) frameworks, but such frameworks are still evolving, and adoption remains low. Audit trails are required to capture the entire decision-making process, allowing insurers to demonstrate to IRDAI exactly how recommendations were generated.
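A minimal illustration of such an audit trail: each automated decision is serialised with its inputs, model version, and reasons at decision time, so the outcome can be reconstructed later for a regulator or ombudsman. Field names here are hypothetical:

```python
# Minimal sketch of an audit-trail record for an automated underwriting decision.
# Field names are illustrative; the point is that inputs, model version, and
# per-feature reasons are captured at decision time, not reconstructed after
# a complaint arrives.

import json
from datetime import datetime, timezone

def audit_record(applicant_id: str, model_version: str,
                 inputs: dict, decision: str, reasons: list) -> str:
    """Serialise one decision, with its inputs and reasons, as a JSON log line."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "reasons": reasons,  # e.g. top feature attributions from an XAI tool
    }, sort_keys=True)
```

Logging the model version alongside the inputs matters: without it, a decision made by a since-retrained model cannot be replayed or explained.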
Hallucinations and Errors in Generative AI
As more insurers adopt GenAI for policy wordings, claim explanations, customer support chatbots, and underwriting summaries, they face a new risk: hallucinations, where AI confidently produces plausible but entirely incorrect answers.
Common hallucination issues:
- Incorrect policy clauses or exclusions
- Fabricated regulatory citations
- Wrong claim interpretation and misleading explanations to customers
- Misquoting IRDAI guidelines on claim settlement timelines
Such errors create regulatory risk, financial exposure, and reputational damage. Insurers must build strong human-in-the-loop (HITL) frameworks to prevent these “AI hallucination mishaps.”
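A simple HITL gate might route any low-confidence or sensitive-topic response to a human reviewer rather than straight to the customer. The threshold and topic list below are illustrative assumptions, not a standard:

```python
# Sketch of a human-in-the-loop (HITL) gate for generative AI output: answers
# below a confidence threshold, or touching sensitive topics, go to a human
# reviewer instead of the customer. Threshold and topics are illustrative.

SENSITIVE_TOPICS = {"claim rejection", "policy exclusion", "regulatory citation"}
CONFIDENCE_THRESHOLD = 0.85

def route_response(answer: str, confidence: float, topics: set) -> str:
    """Return 'auto_send' only when confidence is high and no sensitive topic applies."""
    if confidence < CONFIDENCE_THRESHOLD or topics & SENSITIVE_TOPICS:
        return "human_review"
    return "auto_send"
```

The design choice is deliberately conservative: any mention of exclusions, rejections, or regulatory citations (the categories most prone to hallucination above) is reviewed regardless of the model's stated confidence.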
Data Privacy Concerns and Compliance Obligations
The insurance sector handles one of the highest volumes of sensitive personal data, including medical records, financial information, identity proofs, biometrics, and geolocation data via telematics. The insurance value chain includes multiple data fiduciaries and processors, such as insurance agents, brokers, third-party administrators, and web aggregators, that handle personal data, creating numerous potential points of vulnerability.
AI systems require large quantities of such data for training, raising concerns about:
- Consent management and data minimisation
- Purpose limitation and secure storage
- Vendor management risks and cross-border data sharing
The DPDP Act, 2023, along with IRDAI guidelines, imposes strict obligations on insurers, making AI deployment complex and audit-intensive.
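As one hedged illustration of purpose limitation, the helper below keeps only the fields whose recorded consent covers a given processing purpose. The consent structure is an assumption for the sketch, not the DPDP Act’s prescribed format:

```python
# Sketch of a purpose-limitation filter under consent-based processing: a data
# field is used (e.g. for model training) only if the data principal's recorded
# consent covers that purpose. The consent mapping is an illustrative assumption.

def permitted_fields(record: dict, consent_purposes: dict, purpose: str) -> dict:
    """Keep only the fields whose recorded consent includes the given purpose."""
    return {k: v for k, v in record.items()
            if purpose in consent_purposes.get(k, set())}
```

Applied at the boundary between operational systems and the training pipeline, a filter like this makes data minimisation enforceable rather than aspirational.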
Shortage of Skilled, Cross-Functional Talent
AI deployment requires professionals who understand actuarial science, machine learning, insurance domain knowledge, data science, cybersecurity, and regulatory compliance. India has 416K AI professionals against a demand of 629K (51% gap). By 2026, demand is expected to exceed 1 million professionals, with ML Engineer, Data Scientist, and DevOps Engineer roles showing 60-73% demand-supply disparity.
This leads to high hiring costs, dependence on external vendors, inconsistent model performance, and knowledge gaps in internal teams. Upskilling the workforce is critical to AI’s long-term success.
The Path Forward: Mitigation Strategies
These challenges are formidable, but they’re not insurmountable. Insurers must adopt domain-based approaches rather than scattered use cases, implement human-in-the-loop systems for critical decisions, and conduct regular bias audits. Consolidating fragmented systems, implementing data governance standards, and establishing quality controls for data entry are essential.
Prioritising AI solutions that provide transparent reasoning, implementing privacy-by-design principles, and conducting regular security audits will build trust. Partnering with educational institutions on insurance-specific AI curricula and creating internal upskilling programmes can bridge the talent gap.
The opportunity AI presents to Indian insurers is real and substantial. But realising this potential requires confronting challenges head-on, with a balanced approach that prioritises both innovation and responsibility. The question isn’t whether AI will transform insurance in India; it’s whether insurers will manage that transformation responsibly, turning challenges into competitive advantages while maintaining the trust that is fundamental to the insurance business itself.
Authored by:
Yukti Agarwal, Associate Editor, The Insurance Times

