Introduction
Artificial Intelligence (AI) is transforming industries, economies, and societies, driving innovation in healthcare, education, finance, and beyond. Its rapid adoption, however, raises critical risks: threats to human safety, data privacy violations, algorithmic bias, and societal disruptions such as job displacement. As AI systems become more autonomous, robust legal frameworks are essential to mitigate these risks while fostering ethical innovation. This article examines how countries around the world have moved to regulate AI and provides a detailed roadmap for India, analyzing its current regulatory landscape, proposing new AI-specific laws, and recommending amendments to existing ones. Grounded in global trends and India’s socio-economic context as of June 16, 2025, it offers a comprehensive strategy for AI’s safe, equitable, and inclusive deployment, positioning the country as a potential leader in responsible AI governance.

1. Global Overview: Existing AI Regulatory Frameworks
Several countries have enacted AI-specific laws or adapted existing regulations to address AI’s challenges, while others rely on non-binding guidelines. Below is a streamlined overview of key global approaches, highlighting legislative milestones and gaps.
1.1 European Union
The EU Artificial Intelligence Act (2024) is the world’s first comprehensive AI-specific legislation, effective from August 2024 with full enforcement phased through 2026. It employs a risk-based framework, categorizing AI systems into four tiers:
- Unacceptable Risk: Systems like real-time public biometric surveillance are banned, except in narrowly defined cases with judicial oversight.
- High Risk: Applications in healthcare, law enforcement, and hiring require risk assessments, human oversight, transparency, and cybersecurity measures.
- Limited Risk: Systems like chatbots must disclose AI use to users.
- Minimal Risk: Low-risk AI, such as recommendation algorithms, faces minimal oversight.

The Act mandates labeling for generative AI outputs (e.g., deepfakes) and establishes innovation sandboxes to support startups. Non-compliance penalties can reach €35 million or 7% of global annual turnover, whichever is higher (European Commission, 2024). The EU’s pioneering approach influences global standards, serving as a model for risk-based regulation.
1.2 China
China has implemented targeted AI regulations, balancing innovation with state oversight. The Interim Measures for the Management of Generative AI Services (2023) require generative AI providers to align content with national values, label AI-generated outputs, and prevent misinformation. The Algorithmic Recommendation Management Provisions (2022) mandate user consent and fairness in AI-driven recommendations (Cyberspace Administration of China, 2023). Sector-specific rules govern AI in finance, healthcare, and autonomous vehicles, emphasizing national security. China’s approach is among the most prescriptive, reflecting its centralized governance model.

1.3 United States
The United States lacks a unified federal AI law, relying on sector-specific regulations and state-level initiatives. The National AI Initiative Act (2020) promotes AI research, while the Federal Aviation Administration Reauthorization Act (2024) addresses AI safety in aviation. State laws, such as Utah’s Artificial Intelligence Policy Act (2024), require disclosure of generative AI use in consumer interactions, with fines up to $2,500 per violation (White & Case, 2025). California’s SB 1047 (2024), which would have mandated safety protocols for large-scale AI models, including risk assessments and emergency shutdown mechanisms, passed the legislature but was vetoed by the governor in September 2024 (California Legislature, 2024). The Federal Trade Commission (FTC) enforces fairness, as seen in a 2023 settlement with Rite Aid over biased facial recognition (Morgan Lewis, 2024). The Executive Order on Safe, Secure, and Trustworthy AI (2023) provided non-binding guidance for federal agencies, though it was revoked in January 2025 (White House, 2023). The US’s fragmented approach reflects its innovation-driven economy but risks regulatory inconsistency.

1.4 United Kingdom
The UK employs a decentralized, pro-innovation approach, with no comprehensive AI law. Sector-specific regulators, such as the Financial Conduct Authority, apply existing laws to AI, while the Online Safety Act (2023) tackles AI-generated harmful content, including deepfakes (UK Parliament, 2023). The AI Regulation White Paper (2023) proposes a risk-based framework, but legislation remains under discussion, prioritizing flexibility over strict mandates (UK Government, 2023).
1.5 Japan
Japan favors non-binding guidelines, such as the AI Governance Report (2023), to promote ethical AI use (Ministry of Economy, Trade, and Industry, 2023). Sector-specific laws, like the amended Financial Instruments and Exchange Act (2006), regulate AI in algorithmic trading. Proposals for targeted laws addressing harms like deepfakes are emerging, but Japan emphasizes voluntary compliance to foster innovation (White & Case, 2024).

1.6 Singapore
Singapore’s Model AI Governance Framework (2020) provides non-binding guidance on transparency and accountability (Infocomm Media Development Authority, 2020). Sector-specific regulations, particularly in finance, address AI risks, but no dedicated AI law exists. Singapore’s approach supports its role as a global tech hub while encouraging responsible AI adoption.
1.7 Other Countries
- Canada: The proposed Artificial Intelligence and Data Act (2022), introduced as part of Bill C-27, aimed for risk-based regulation of high-risk AI systems, but the bill lapsed when Parliament was prorogued in early 2025 (Government of Canada, 2022).
- Australia: The AI Ethics Framework (2019) guides responsible AI use, with draft legislation for high-risk AI under consultation (Australian Government, 2019).
- Saudi Arabia: The National Strategy for Data and AI (2020) promotes AI adoption, but no binding laws exist (Saudi Data and AI Authority, 2020).
- African Nations: Kenya and Nigeria are drafting AI policies focusing on ethics and capacity-building, but enforceable laws are absent (White & Case, 2024).
Summary: The EU and China lead with AI-specific laws, while the US, UK, Japan, and Singapore rely on sector-specific regulations or guidelines. Developing nations, including India, are formulating policies but lag in binding legislation, creating a diverse global regulatory landscape.
2. India’s Current AI Regulatory Landscape
India currently has no dedicated AI-specific laws or statutory regulations, as confirmed by legal and industry analyses (White & Case, 2024; India Briefing, 2024; Morgan Lewis, 2024). Instead, AI is governed indirectly through existing laws, non-binding guidelines, and government advisories. Below is a detailed examination of India’s regulatory framework, highlighting strengths, gaps, and recent developments.
2.1 Existing Laws Relevant to AI
The following laws indirectly regulate AI applications, but their general scope limits effectiveness for AI’s unique challenges:
- Information Technology Act, 2000 (IT Act): The IT Act, with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules), governs digital platforms, including AI-driven ones. The IT Rules require intermediaries to remove harmful content, such as deepfakes, within 36 hours and ensure AI systems avoid bias or discrimination. A Ministry of Electronics and Information Technology (MeitY) Advisory (March 2024) mandates government approval for deploying “unreliable” AI models, large language models (LLMs), or generative AI, and requires labeling AI-generated content to combat misinformation. However, ambiguous terms like “significant platforms” hinder enforcement clarity (Lexology, 2024).
- Digital Personal Data Protection Act, 2023 (DPDPA): Enacted but not yet enforced as of June 2025, the DPDPA regulates personal data processing, including by AI systems. It emphasizes consent and data minimization but exempts publicly available data, allowing AI systems to scrape social media without user permission. Unlike the EU’s GDPR, it lacks provisions for objecting to automated decision-making or requesting human review, weakening protections against AI-driven decisions (Chambers and Partners, 2024; Law.asia, 2024).
- Copyright Act, 1957, Trade Marks Act, 1999, and Patents Act, 1970: These intellectual property (IP) laws apply to AI-generated content but deny AI legal personhood, meaning AI cannot own creations. Ownership defaults to human creators or developers, creating ambiguity for AI-generated works, such as art or music, and leaving creators vulnerable to unauthorized data scraping (Law.asia, 2024).
- Consumer Protection Act, 2019: This law addresses consumer harms from AI, such as misleading advertisements or biased outputs, but lacks AI-specific provisions. Consumers can seek remedies for unfair practices, but enforcement against AI-driven harms is limited due to vague applicability (Carnegie Endowment, 2024).
- Sector-Specific Regulations:
  - Finance: The Securities and Exchange Board of India (SEBI) regulates AI in algorithmic trading, requiring transparency and risk management (SEBI, 2023).
  - Healthcare: AI medical devices must comply with the National Medical Commission Act, 2019 (which replaced the Indian Medical Council Act, 1956), and telemedicine guidelines, but these are not AI-specific (Ministry of Health, 2020).
  - Education: The Right to Education Act, 2009, and the DPDPA apply to AI in education, but gaps persist in addressing fairness and privacy (Ministry of Education, 2023).

2.2 Non-Binding Guidelines and Initiatives
India has developed non-binding frameworks to guide AI development, reflecting a commitment to ethical and inclusive AI:
- National Strategy for Artificial Intelligence (2018): Launched by NITI Aayog under the #AIForAll initiative, this strategy promotes AI in healthcare, education, agriculture, smart cities, and transportation. It emphasizes responsible AI, focusing on ethics, privacy, and security to position India as a global AI hub (NITI Aayog, 2018).
- Principles for Responsible AI (2021): NITI Aayog’s two-part approach paper outlines ethical principles: transparency, accountability, inclusivity, safety, and privacy. Part 1 (February 2021) addresses system-level (e.g., decision-making) and societal (e.g., job impacts) considerations, while Part 2 (August 2021) proposes policy interventions and private-sector collaboration to operationalize these principles (NITI Aayog, 2021).
- NASSCOM Guidelines for Generative AI (2023): These voluntary guidelines promote responsible generative AI development, addressing risks like misinformation, bias, and IP violations (NASSCOM, 2023).
- TRAI Recommendations (2023): The Telecom Regulatory Authority of India (TRAI) advocates a nationwide AI regulatory framework across sectors, emphasizing risk-based oversight and a statutory authority to enforce standards (TRAI, 2023).
2.3 Government Investments and Global Engagement
- India AI Mission (2024): Approved in March 2024, this initiative allocates INR 103 billion (USD 1.25 billion) over five years for AI infrastructure, including computing resources, LLMs, and a National Data Management Office to enhance data quality for AI training (India Briefing, 2024).
- Global Partnership on AI (GPAI): India’s membership and hosting of the 2023 GPAI Summit in New Delhi underscore its commitment to global AI governance, focusing on responsible AI and data protection (Morgan Lewis, 2024).
- OECD AI Principles: India aligns with these principles, emphasizing human-centric AI and transparency (OECD, 2019).
2.4 Recent Developments
- MeitY Advisory (March 2024): This advisory targets large platforms, requiring bias-free AI models, metadata labeling for generative AI outputs, and government approval for untested AI systems. It exempts startups to foster innovation but lacks enforcement clarity due to vague terminology (Lexology, 2024).
- Deepfake Litigation: A 2023 viral deepfake video prompted MeitY advisories under IT Rules to combat misinformation. The Delhi High Court is reviewing a public interest litigation (PIL) on unregulated AI and deepfakes, signaling judicial pressure for regulation (India Today, 2024).
- Digital India Act (DIA): Proposed in 2022 to replace the IT Act, the DIA aims to address AI regulation, online safety, and data protection. As of June 2025, it remains in draft, with public consultations pending (SNR Law, 2023).
2.5 Gaps in India’s Current Framework
India’s reliance on general laws and advisories reveals significant gaps:
- No AI-Specific Legislation: The IT Act and DPDPA are too broad to address AI’s complexities, such as autonomous decision-making or generative AI risks.
- Ambiguous Advisories: The MeitY advisory’s vague terms (e.g., “significant platforms”) create compliance uncertainty.
- Weak Privacy Protections: The DPDPA’s exemption for public data enables unchecked AI scraping, and the absence of automated decision-making rights lags behind GDPR.
- IP Ambiguity: Unclear ownership of AI-generated content hinders innovation and creator rights.
- Sectoral Gaps: Healthcare, education, and employment lack AI-specific regulations, risking harm in critical areas.
- Fragmented Enforcement: Without a dedicated AI regulator, oversight is scattered across ministries, reducing effectiveness.
3. Proposed New Laws for India to Regulate AI
To address these gaps, India requires dedicated AI-specific legislation tailored to its socio-economic context, drawing on global models like the EU AI Act and China’s regulations. Below are refined proposals for new laws, designed to ensure human safety, protect data and privacy, mitigate bias, and address emerging risks.
3.1 Artificial Intelligence and Data Authority of India (AIDAI) Act
This foundational law would establish the AIDAI, a statutory body to oversee AI development and regulation, as recommended by TRAI (TRAI, 2023). The AIDAI’s responsibilities would include:
- Risk-Based Regulation: Classify AI systems as low, medium, high, or unacceptable risk, with stricter oversight for high-risk systems (e.g., autonomous vehicles, judicial AI).
- Audits and Certification: Conduct regular audits of high-risk AI systems and certify compliance with safety and ethical standards.
- Licensing: Require licenses for developers of advanced AI models, such as LLMs, to ensure accountability.
- Sectoral Coordination: Collaborate with regulators like SEBI, RBI, and the Ministry of Health to align AI standards across industries.
- Innovation Support: Provide testing sandboxes for startups and researchers, balancing regulation with growth, as seen in the EU AI Act (European Commission, 2024).
- Public Engagement: Facilitate public consultations to ensure inclusive AI governance, enhancing democratic legitimacy.

The AIDAI would centralize enforcement, streamline compliance, and align India with global standards.
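
To make the proposed tiering concrete, below is a minimal sketch of how a regulator-published rulebook might map AI use cases to risk tiers and baseline obligations. It is illustrative only: the tier names mirror the classification above, but the example use cases, obligations, and the classify_use_case helper are hypothetical and drawn from no AIDAI draft.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # audits, certification, human oversight
    MEDIUM = "medium"              # transparency and labeling duties
    LOW = "low"                    # minimal oversight

# Hypothetical rulebook: use case -> (tier, baseline obligations).
RULEBOOK = {
    "real_time_biometric_surveillance": (RiskTier.UNACCEPTABLE, ["prohibited"]),
    "judicial_decision_support": (RiskTier.HIGH, ["annual audit", "certification", "human review"]),
    "autonomous_vehicle_control": (RiskTier.HIGH, ["annual audit", "certification"]),
    "customer_service_chatbot": (RiskTier.MEDIUM, ["disclose AI use to users"]),
    "music_recommendation": (RiskTier.LOW, []),
}

def classify_use_case(use_case: str) -> tuple[RiskTier, list[str]]:
    """Look up a use case; unknown systems default to HIGH pending review."""
    return RULEBOOK.get(use_case, (RiskTier.HIGH, ["provisional: regulator review required"]))

if __name__ == "__main__":
    tier, duties = classify_use_case("customer_service_chatbot")
    print(tier.value, duties)  # -> medium ['disclose AI use to users']
```

Defaulting unknown systems to the high-risk tier pending review is a conservative choice, consistent with the precautionary stance of the EU AI Act.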
3.2 AI Safety and Ethics Act
This law would mandate safety and ethical standards for AI, addressing risks to human safety, bias, and societal harm. Key provisions include:
- Mandatory Risk Assessments: Require developers to conduct and publicly disclose risk assessments for high-risk AI systems, evaluating physical safety, bias, and societal impacts (e.g., misinformation).
- Human-in-the-Loop Oversight: Mandate human review for critical AI decisions, such as loan approvals, medical diagnoses, or criminal sentencing, to prevent over-reliance on automation.
- Bias Mitigation: Require regular audits for biased AI outputs, with remediation plans and penalties for non-compliance, addressing issues like biased hiring algorithms (Science, 2019).
- Labeling AI Outputs: Enforce permanent metadata or watermarks for generative AI content (e.g., images, videos) to combat deepfakes, building on MeitY’s 2024 advisory (Lexology, 2024); a labeling sketch follows this list.
- Prohibition of Harmful AI: Ban unacceptable-risk systems, such as fully autonomous lethal weapons or real-time biometric surveillance without oversight, aligning with EU standards (European Commission, 2024).
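
A minimal sketch of what such labeling could look like at the file level follows, assuming a simple JSON sidecar manifest bound to the output by a SHA-256 hash. The field names and the label_ai_output and verify_label helpers are hypothetical; production systems would more likely embed provenance in-band via an industry standard such as C2PA, since a detached manifest can simply be discarded.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_ai_output(content: bytes, model_id: str) -> str:
    """Build a provenance manifest for one piece of generated media."""
    manifest = {
        "ai_generated": True,
        "model_id": model_id,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # The hash binds the manifest to the exact output bytes, so editing
        # the content (or swapping in other content) invalidates the label.
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(manifest, indent=2)

def verify_label(content: bytes, manifest_json: str) -> bool:
    """Check that a manifest still matches the content it claims to label."""
    manifest = json.loads(manifest_json)
    return manifest.get("sha256") == hashlib.sha256(content).hexdigest()

if __name__ == "__main__":
    media = b"...generated media bytes..."
    label = label_ai_output(media, model_id="example-model-v1")
    assert verify_label(media, label)
    print(label)
```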
3.3 AI Liability Act
To clarify accountability, this law would establish a strict liability framework for high-risk AI systems, holding developers, deployers, or operators accountable unless they prove compliance with certified safety standards. For example, if an AI medical device misdiagnoses a patient, the developer could be liable unless safety protocols were followed, similar to EU proposals (European Parliament, 2023). The law would:
- Define liability for physical, financial, or psychological harms caused by AI.
- Provide legal recourse through courts or ombudsman offices.
- Incentivize developers to prioritize safety to avoid liability.
3.4 Generative AI Regulation Act
Given generative AI’s unique risks, such as misinformation, deepfakes, and IP violations, a dedicated law is needed. Provisions would include:
- Transparency in Training Data: Require developers to disclose datasets and methodologies used to train generative AI models, enabling audits for bias or unethical data use (a disclosure-record sketch follows this list).
- User Consent: Mandate explicit, informed consent for using personal data in AI training, including scraped content from social media or public platforms.
- Prohibition of Harmful Content: Criminalize non-consensual deepfakes or AI-generated misinformation, with penalties for developers and distributors.
- Content Moderation: Require platforms to monitor and remove harmful AI-generated content, aligning with NASSCOM guidelines (NASSCOM, 2023).
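
As a rough illustration of the training-data disclosure requirement flagged above, the sketch below shows a datasheet-style record plus a trivial audit check. The schema and the validate_disclosure helper are invented for illustration; the Act itself would need to specify the actual fields.

```python
import json

# Hypothetical disclosure record for one training dataset; the field
# names are illustrative, not taken from any statutory schema.
dataset_disclosure = {
    "dataset_name": "example-news-corpus",
    "source": "licensed publisher archive",  # as opposed to "web scrape"
    "contains_personal_data": True,
    "consent_basis": "explicit opt-in",
    "collection_period": "2020-2023",
    "known_limitations": ["English-only", "urban-skewed coverage"],
}

def validate_disclosure(record: dict) -> list[str]:
    """Flag the gaps a regulator's audit might look for first."""
    problems = []
    if record.get("contains_personal_data") and not record.get("consent_basis"):
        problems.append("personal data used without a stated consent basis")
    if record.get("source") == "web scrape":
        problems.append("scraped source: consent under the Act must be verified")
    return problems

print(json.dumps(dataset_disclosure, indent=2))
print(validate_disclosure(dataset_disclosure))  # -> [] (no flags raised)
```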
3.5 AI in Employment Act
To address workforce impacts, this law would protect workers from unfair AI-driven decisions and mitigate job displacement. Key provisions include:
- Transparency: Require employers to disclose AI use in hiring, performance evaluations, or terminations, ensuring workers understand automated decisions (see the audit sketch after this list).
- Right to Challenge: Grant workers the right to appeal AI-driven decisions, such as terminations, with human review.
- Reskilling Programs: Mandate companies deploying large-scale AI automation to fund reskilling initiatives, inspired by Singapore’s SkillsFuture program (SkillsFuture Singapore, 2023).
- Economic Safeguards: Encourage government pilots of Universal Basic Income (UBI) to address AI-driven unemployment, drawing on Finland’s 2017 experiment (KELA, 2019).
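
The transparency duty above, combined with the bias-audit requirement proposed in Section 3.2, could be operationalized with standard statistical screens. The sketch below applies the four-fifths (80%) rule, familiar from US employment practice, to hypothetical selection data; failing the check is a red flag that warrants investigation, not proof of discrimination.

```python
def selection_rates(decisions: dict[str, tuple[int, int]]) -> dict[str, float]:
    """decisions maps group -> (selected, total applicants)."""
    return {group: sel / total for group, (sel, total) in decisions.items()}

def four_fifths_check(decisions: dict[str, tuple[int, int]]) -> bool:
    """Pass if the lowest group selection rate is at least 80% of the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) >= 0.8 * max(rates.values())

# Hypothetical audit of an AI screening tool's outcomes.
outcomes = {"group_a": (50, 200), "group_b": (30, 200)}  # 25% vs 15%
print(selection_rates(outcomes))    # {'group_a': 0.25, 'group_b': 0.15}
print(four_fifths_check(outcomes))  # False: 0.15 < 0.8 * 0.25 = 0.20
```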
3.6 Child Safety in AI Act
To protect minors from AI-related harms, such as exposure to inappropriate generative AI content (e.g., deepfake pornography or grooming via chatbots), this law would:
- Mandate age-appropriate AI design, including content filters and parental controls.
- Require platforms to verify user ages and restrict harmful AI outputs for minors.
- Impose penalties for non-compliance, building on the US Children’s Online Privacy Protection Act (COPPA) (FTC, 2023).
3.7 AI in Healthcare Act
AI in healthcare, such as diagnostic tools or telemedicine, requires sector-specific regulation to ensure patient safety and data security. This law would:
- Mandate clinical validation and certification for AI medical devices, aligning with FDA guidelines (FDA, 2023).
- Require patient consent for AI-driven diagnoses or data processing.
- Enforce HIPAA-like protections for health data used in AI systems, addressing DPDPA gaps (Chambers and Partners, 2024).
3.8 AI in Education Act
AI in education, such as personalized learning or grading, raises privacy and fairness concerns. This law would:
- Update the Right to Education Act, 2009, to include AI-specific provisions for student data protection, aligning with the DPDPA.
- Mandate transparency in AI-driven grading or admissions to ensure fairness.
- Ensure equitable access to AI educational tools, addressing digital divides (Ministry of Education, 2023).
4. Existing Laws to Be Amended in India
To align with AI’s challenges, several existing laws require amendments to address gaps and ensure effective regulation. Below are targeted recommendations:
4.1 Information Technology Act, 2000 (and IT Rules, 2021)
- Amendments:
  - Incorporate AI-specific provisions in the IT Rules, mandating disclosure of AI use in consumer interactions (e.g., chatbots) and liability for intermediaries hosting harmful AI-generated content, such as deepfakes.
  - Clarify the MeitY 2024 advisory’s scope, defining “significant platforms” and “unreliable” AI models to ensure consistent enforcement.
  - Strengthen penalties for non-compliance with AI labeling and bias mitigation requirements.
- Rationale: The IT Act’s broad framework is inadequate for AI’s complexity, particularly for generative AI and misinformation. Enhanced intermediary obligations will improve accountability (Lexology, 2024).
4.2 Digital Personal Data Protection Act, 2023 (DPDPA)
- Amendments:
  - Grant individuals the right to object to automated decision-making and request human review, similar to GDPR’s Article 22 (see the routing sketch at the end of this subsection).
  - Require explicit consent for using personal data in AI training, including publicly available data, to prevent unauthorized scraping.
  - Mandate transparency in AI data processing, requiring companies to disclose AI use in services (e.g., credit scoring).
- Rationale: The DPDPA’s exemption for public data enables unchecked AI scraping, risking privacy violations, and the absence of rights around automated decision-making undermines fairness (Chambers and Partners, 2024).
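
As a sketch of what an Article 22-style right could mean in practice for a deployer, the hypothetical routing logic below escalates legally significant automated decisions, and any decision the data subject has objected to, to a human reviewer. The Decision structure and its fields are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str           # e.g. "loan_denied"
    significant: bool      # has a legal or similarly significant effect
    objection_filed: bool = False

def requires_human_review(d: Decision) -> bool:
    """Route significant decisions, and any objected-to decision, to a human."""
    return d.significant or d.objection_filed

queue = [
    Decision("u1", "loan_denied", significant=True),
    Decision("u2", "ad_ranking_changed", significant=False),
    Decision("u3", "ad_ranking_changed", significant=False, objection_filed=True),
]
for d in queue:
    route = "human review" if requires_human_review(d) else "automated"
    print(d.subject_id, d.outcome, "->", route)
# u1 and u3 are escalated; u2 stays automated.
```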
4.3 Copyright Act, 1957
- Amendments:
  - Clarify ownership of AI-generated works, specifying whether creators, developers, or users hold copyrights.
  - Introduce protections against unauthorized data scraping for AI training, addressing lawsuits against AI art generators (The New York Times, 2024).
- Rationale: Current IP laws create ambiguity for AI-generated content, hindering innovation and creator rights (Law.asia, 2024).
4.4 Consumer Protection Act, 2019
- Amendments:
  - Include AI-specific provisions to address deceptive AI marketing, biased outputs, or unfair automated decisions (e.g., loan denials).
  - Grant consumers the right to challenge AI-driven outcomes and seek remedies through consumer courts or ombudsman offices.
- Rationale: Consumers lack clear recourse against AI-driven harms, such as opaque credit scoring, under current provisions (Carnegie Endowment, 2024).
4.5 Bharatiya Nyaya Sanhita, 2023 (Successor to the Indian Penal Code, 1860) and Related Laws
- Amendments:
  - Introduce AI-specific offenses, such as creating or distributing non-consensual deepfakes, with criminal penalties.
  - Expand cybercrime provisions to cover malicious AI use, like automated phishing or fraud, drawing on the US Computer Fraud and Abuse Act (US Department of Justice, 2023).
- Rationale: Deepfake-related harms, highlighted by high-profile cases in 2023, require stronger legal deterrents (India Today, 2024). Since the Bharatiya Nyaya Sanhita replaced the Indian Penal Code with effect from July 2024, these amendments should target the new code.
4.6 Sector-Specific Laws
- National Medical Commission Act, 2019 (which replaced the Indian Medical Council Act, 1956):
  - Amend to include validation and certification requirements for AI medical devices, ensuring patient safety and data protection.
  - Mandate patient consent for AI-driven diagnoses, addressing DPDPA gaps (FDA, 2023).
- SEBI Regulations:
  - Update to mandate transparency and fairness in AI-driven financial algorithms, addressing risks like market manipulation (SEBI, 2023).
- Right to Education Act, 2009:
  - Amend to include AI-specific provisions for student data protection and equitable access to AI educational tools (Ministry of Education, 2023).
- Rationale: Sector-specific gaps leave AI applications under-regulated, risking harm in critical areas like healthcare, finance, and education.
5. Implementation Considerations
Effective AI regulation in India requires addressing several challenges to ensure robust implementation:
- Enforcement: The AIDAI would centralize enforcement, but coordination with existing regulators (e.g., SEBI, RBI) is critical to avoid overlap. Fines, audits, and licensing revocations, as seen in GDPR’s enforcement (€1 billion in fines in 2023), can deter non-compliance (Data Protection Commission, 2024).
- Adaptability: AI’s rapid evolution demands principles-based laws updated by expert commissions, similar to the EU’s AI Board (European Commission, 2024).
- Global Alignment: India’s GPAI membership and OECD AI Principles commitment necessitate harmonization with global standards to avoid trade barriers and attract investment (Morgan Lewis, 2024).
- Balancing Innovation: Strict regulations could impact India’s AI ecosystem, projected to reach USD 14.72 billion by 2030 (Global Legal Insights, 2024). Startup exemptions and testing sandboxes, as clarified by MeitY’s advisory, can foster innovation (Lexology, 2024).
- Capacity Building: India’s regulatory infrastructure may lack AI expertise. International aid and public-private partnerships, like the India AI Mission, can bridge this gap (India Briefing, 2024).
- Public Trust: Transparent regulation and public consultations, as practiced in the EU, can build trust and address societal concerns (European Commission, 2024).
6. Conclusion
Globally, the EU and China lead with comprehensive AI-specific laws, while the US, UK, Japan, and Singapore rely on sector-specific regulations and guidelines. Developing nations like India are still formulating policies, with no dedicated AI laws but a framework of existing laws and non-binding guidelines. In India, the IT Act, DPDPA, and sector-specific regulations govern AI indirectly, supported by NITI Aayog’s #AIForAll strategy and MeitY’s 2024 advisory. To regulate AI effectively, India should enact new laws, including an AIDAI Act, AI Safety and Ethics Act, AI Liability Act, Generative AI Regulation Act, AI in Employment Act, Child Safety in AI Act, AI in Healthcare Act, and AI in Education Act. Existing laws, such as the IT Act, DPDPA, Copyright Act, Consumer Protection Act, and sector-specific regulations, require amendments to address AI-specific risks like bias, privacy violations, deepfakes, and sectoral gaps. By implementing these measures with robust enforcement, global alignment, and innovation-friendly policies, India can harness AI’s transformative potential while safeguarding its citizens, cementing its role as a global leader in responsible AI governance.
References
- Australian Government. (2019). AI Ethics Framework.
- California Legislature. (2024). SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
- Carnegie Endowment. (2024). India’s Advance on AI Regulation.
- Chambers and Partners. (2024). Artificial Intelligence 2024 – India.
- Cyberspace Administration of China. (2023). Interim Measures for the Management of Generative AI Services.
- Data Protection Commission. (2024). GDPR Enforcement Report 2023.
- European Commission. (2024). EU Artificial Intelligence Act.
- European Parliament. (2023). Proposal for AI Liability Directive.
- FDA. (2023). Guidelines for AI-Based Medical Devices.
- FTC. (2023). Children’s Online Privacy Protection Act.
- Global Legal Insights. (2024). AI, Machine Learning & Big Data Laws 2024 | India.
- Government of Canada. (2022). Artificial Intelligence and Data Act.
- India Briefing. (2024). Regulation of AI and Large Language Models in India.
- India Today. (2024). Delhi High Court Reviews PIL on AI and Deepfakes.
- Infocomm Media Development Authority. (2020). Model AI Governance Framework.
- KELA. (2019). Finland UBI Experiment Results.
- Law.asia. (2024). Balancing Artificial Intelligence, Ethics, and the Constitution.
- Lexology. (2024). Regulation of Artificial Intelligence in India: Scope of New Advisory.
- Ministry of Economy, Trade, and Industry. (2023). AI Governance Report.
- Ministry of Education. (2023). Right to Education Act Guidelines.
- Ministry of Health. (2020). Telemedicine Guidelines.
- Morgan Lewis. (2024). AI Regulation in India: Current State and Future Perspectives.
- NASSCOM. (2023). Guidelines for Responsible Generative AI.
- NITI Aayog. (2018). National Strategy for Artificial Intelligence.
- NITI Aayog. (2021). Responsible AI for All: Principles and Operationalization.
- OECD. (2019). OECD AI Principles.
- Saudi Data and AI Authority. (2020). National Strategy for Data and AI.
- Science. (2019). Bias in AI Hiring Tools.
- SEBI. (2023). Guidelines for Algorithmic Trading.
- SkillsFuture Singapore. (2023). SkillsFuture Program Report.
- SNR Law. (2023). Digital India Act: Proposed Framework.
- The New York Times. (2024). Lawsuits Against AI Art Generators.
- TRAI. (2023). Recommendations on Leveraging AI and Big Data in Telecom.
- UK Government. (2023). AI Regulation White Paper.
- UK Parliament. (2023). Online Safety Act.
- US Department of Justice. (2023). Computer Fraud and Abuse Act.
- White & Case. (2024). AI Watch: Global Regulatory Tracker – India.
- White & Case. (2025). AI Watch: Global Regulatory Developments.
- White House. (2023). Executive Order on Safe, Secure, and Trustworthy AI.