Editorial

The alarm bell on AI risks

Sentinel Digital Desk

India has been ranked high for artificial intelligence (AI) skill penetration, capability, and policy. This notable achievement, however, has come with concerns over the responsible use of AI and apprehension about data quality. The IndiaAI Mission, approved by the central government in March, aims to make the country a global leader in AI by focusing on seven key areas. These include building a high-end, scalable AI computing ecosystem, establishing an innovation centre, enhancing the access, quality, and utilisation of public sector datasets to make them AI-ready, and increasing the number of graduates, postgraduates, and researchers in the domain. The central government approved a total outlay of Rs 10,371 crore for the mission over a period of five years, an indication that the country's AI boom will gather momentum in the coming years. The unveiling of the country's first practical AI Data Bank earlier this month marks a significant step forward in AI use. The data bank aims to accelerate technological growth and innovation by giving researchers, startups, and developers access to the high-quality, diverse datasets essential for creating scalable and inclusive AI solutions. Union Minister Jitendra Singh, while launching the data bank, underscored its strategic importance in enhancing national security through real-time data analytics, which, he said, aligns with India's goal of using AI for predictive analytics in disaster management and cybersecurity. The spread of digital technology in communication, education, healthcare, research, innovation, banking, shopping, and governance has been changing our lives in ways that could not have been imagined a few decades ago. It has made the world a more connected place and life much easier, but not without challenges such as cybercrime, online banking fraud, data breaches, and mental health issues.
The challenges brought by digital technology are a reminder to the government, policymakers, industry, and all other stakeholders to prioritise robust safeguards against any abuse of AI that may give rise to AI bias, violations of data privacy and data protection, and similar harms that could adversely affect individuals or communities. The central government, however, allays apprehension over data quality by insisting that a robust content review policy is in place for the open government data platform, with routine testing of web pages. The Digital Personal Data Protection (DPDP) Act, 2023 is expected to come into force next year, once its rules are framed and notified. The Act seeks to safeguard the personal data of individuals and ensure that personal data is processed only for lawful purposes. It stipulates that appropriate technical and organisational measures must be implemented for processing personal data, and that reasonable security safeguards must be taken to prevent any personal data breach. Currently, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, framed under the Information Technology Act, 2000, provide the framework for applying reasonable security practices and procedures such as informed consent and access control. However, there are concerns about gaps in the existing legal regime when it comes to situations such as data misuse by platform companies that run social media networks from headquarters outside India, and to securing such companies' compliance with the rules.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, make it mandatory for social media intermediaries and platform companies to ensure accountability for a safe and trusted internet, including expeditious action to remove prohibited misinformation, patently false information, and deepfakes. However, the disinformation, misinformation, and harmful content that continue to circulate on and grip social media raise doubts over the effectiveness of this legal regime. Such gaps have sounded the alarm on the risks and dangers of AI in the absence of a strong legal framework to curb its irresponsible use. If AI systems train their algorithms on bad data, the result can be unimaginable dangers to humans. Rising incidents of cybercriminals using AI-generated images and voice cloning to dupe people through 'digital arrest' scams, emptying their bank accounts and stealing their personal digital identities, make it a top priority for the country to mandate responsible AI use through adequate legal provisions. Abuse of AI by cybercriminals has already posed a huge challenge to cyber patrolling, which is still evolving. Developing effective AI tools to counter such threats is crucial to building people's trust in the country's AI capabilities, skills, and tools to transform lives for the better. Apart from the rise in cybercrime, other concerns, such as job losses from AI-driven automation, cannot be ignored in a country like India, which is already grappling with mounting unemployment and a livelihood crisis. For the country's AI strategy to balance the risks and opportunities of AI, people's concerns and apprehensions must be given top priority while exploring and developing AI-based solutions and applying them to the problems people face.