AI: Possible Regulations
Thailand is shaping the future of artificial intelligence (AI) through a public consultation led by the Electronic Transactions Development Agency (ETDA) from May 10 to June 9, 2025. AI offers immense potential to boost economic growth, enhance industries, and improve quality of life, but it also raises concerns about data privacy, system security, fairness, and ethical impacts. Building on lessons from earlier drafts, such as the 2022 Royal Decree on Business Operations using AI Systems and the 2023 Act on AI Innovation Promotion, which faced criticism for broad frameworks and burdensome rules, ETDA’s new draft principles aim to balance innovation with responsibility. The consultation seeks public input on four key areas: AI risk management, industry promotion, user rights, and enforcement mechanisms.

1. AI Risk Management Principles:

To ensure safe AI development and use, the draft principles propose a risk-based approach:

  • Risk Level Assignment: Instead of fixed criteria for Prohibited-risk or High-risk AI, industry-specific regulators will collaborate to define these categories, tailoring rules to sector-specific risks.
  • Domestic Representatives for Foreign Providers: Foreign AI providers must appoint local legal representatives to ensure compliance with Thai regulations.
  • Incident Reporting: High-risk AI providers are required to report serious incidents to authorities, enabling a swift response to potential harms.
  • User Responsibilities: AI users must maintain human oversight, log activities, ensure high-quality input data to avoid bias, assess potential impacts, and cooperate with regulators.

These measures aim to address risks like misalignment with ethical AI governance and future high-impact concerns, as highlighted by global incident databases (e.g., OECD AI Incidents Monitor).

2. Promoting the AI Industry:

To foster innovation, the draft principles encourage AI development while addressing past criticisms of regulatory burdens:

  • Text and Data Mining (TDM): Inspired by the EU, Thailand proposes allowing TDM for research and development, enabling access to copyrighted data for AI training.
  • Regulatory Sandboxes: Controlled testing environments will support real-world AI trials, the reuse of personal data for public-interest projects, and penalty-free “safe harbor” zones for developers.
  • AI Governance Clinic (AIGC): Established in 2022, the AIGC provides technical and practical guidance to public and private sectors, streamlining compliance and innovation.

These initiatives aim to create a supportive ecosystem for AI startups and established players alike.

3. Rights of AI Users:

The draft principles prioritize user protections, ensuring individuals affected by high-risk AI have clear rights:

  • Right to be Informed: Individuals must be notified when AI impacts them, fostering transparency.
  • Right to Explanation: Users can request clear explanations of how AI systems function and influence decisions, particularly when affecting their lives or security.
  • Right to Oppose AI Decisions: Individuals can challenge AI-generated decisions and request human-led alternatives, safeguarding autonomy.

These rights, inspired by frameworks like the EU AI Act, empower users and build public trust in AI systems.

4. Enforcement and Penalties:

To ensure compliance, the draft principles outline enforcement and penalty mechanisms, with a focus on emerging risks:

  • Generative AI Misuse: Beyond existing laws such as the Computer-related Crime Act, ETDA is exploring criminal penalties for misuse of generative AI, such as creating deepfakes or election-related misinformation (e.g., false audio, images, or videos).
  • Enforcement Measures: Administrative orders can halt the use or distribution of Prohibited-risk or misused High-risk AI. If violations persist, regulators may propose collaboration with the Ministry of Digital Economy and Society to restrict access via internet service providers, though this is under consultation.
  • Penalties: Penalties remain flexible, potentially including fines, disqualification from innovation support, or administrative measures, depending on whether governance is mandatory or voluntary. Specifics are still under review.

Conclusion:

ETDA’s public consultation advances Thailand’s efforts to foster an environment where AI innovation thrives alongside responsible use and protection for those affected. It offers stakeholders a vital opportunity to shape AI governance, enabling technological progress while upholding public trust and ethical standards.

Author: Panisa Suwanmatajarn, Managing Partner.