
US President Joe Biden’s Executive Order on the Development and Use of Artificial Intelligence (AI) has set the tone for renewed global discourse on AI risks and safeguards. Other countries, including India, are expected to take a cue from the Biden administration’s move and articulate their own regulatory frameworks based on their national strategies for managing AI risks and harnessing its vast opportunities.

The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” signed by Biden states that “responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.” It emphasises that harnessing AI for good and realising its myriad benefits requires mitigating its substantial risks.

The executive order defines AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. It adds that AI systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

The executive order requires developers of the most powerful AI systems to share their safety test results and other critical information with the US government, so that such systems can be verified as safe, secure, and trustworthy before they are made public. It also has provisions for protecting Americans from AI-enabled fraud and deception through content authentication and watermarking to label AI-generated content.

The rapid spread of digital technology and innovation has exposed people to emerging technologies, yet most people across the globe remain unaware of the risks of their unregulated use. This has allowed cybercriminals to abuse these technologies to target banking and other financial systems: stealing identities, hacking into bank and other financial accounts, and illegally withdrawing money. Experts have already sounded the alarm over cybercriminals abusing AI to dominate cyberspace, automate malware that targets banking and financial systems, and hack into public data. FraudGPT is one example of AI being used on the dark web to automate undetectable malware, write malicious code, and generate deceptive content, making cyber policing a tougher challenge. The release of ChatGPT has triggered competition among tech companies to build AI systems, but malicious tools like FraudGPT demonstrate that unregulated AI development and use could prove dangerous.

Industry estimates suggest that AI will contribute an additional 957 billion US dollars to the Indian economy by 2035, indicative of the pace of AI growth in the country. The Central Government has spelled out India’s position on regulating AI. It recognises AI as a kinetic enabler of the digital economy and innovation ecosystem, and is harnessing its potential to provide personalised and interactive citizen-centric services through digital public platforms.
However, AI raises ethical concerns and risks, including bias and discrimination in decision-making, privacy violations, a lack of transparency in AI systems, and questions about responsibility for the harm such systems cause. These concerns were highlighted in the National Strategy for AI (NSAI), released in June 2018, the government told the Lok Sabha in April. It also informed the House that, to address the ethical concerns and potential risks associated with AI, various central and state government departments and agencies have begun efforts to standardise responsible AI development and use and to promote best practices. Additionally, NITI Aayog has published a series of papers on responsible AI for all. However, the government is not considering enacting a law or otherwise regulating the growth of artificial intelligence in the country.

The US executive order is likely to trigger fresh thinking on a stricter regulatory regime that taps AI’s potential while ensuring its development poses no threat to people or to national security. The order also has provisions to attract AI talent to promote AI development, which could lead to a brain drain from developing nations like India, Brazil, and China. Emerging market economies cannot ignore this reality: they will have to build ecosystems for safe and secure AI development at home so that their talent pools are not lost to the developed world and their populations are not deprived of the benefits of ethical AI development. While countries formulate their regulatory frameworks for AI development and application, it is equally important to build public awareness so that people can form an informed opinion on whether AI is a boon or a bane.