Vietnam Has an AI Law. Its AI Reality Is More Uneven.

Vietnam just became the first country in Southeast Asia to enforce a comprehensive AI law. The legislation took effect March 1, establishing a risk-based regulatory framework, requiring human oversight of high-risk AI systems, mandating labeling of AI-generated content, and applying to foreign and domestic entities alike. By any measure, it is an ambitious piece of governance.

It is also governing a reality that, in many parts of the country, does not yet seem fully formed in everyday life.


I spent time in Vietnam recently. In Ho Chi Minh City and Hanoi, the signs of a modernizing economy were everywhere — ride-hailing apps, QR-code payments, construction cranes above new commercial towers. But in Hoi An, where the central market still functions as the commercial engine of the old town, tailors and vendors preferred cash. Cards were accepted reluctantly, if at all. In the daily commerce I was part of, AI was not visible. The gap between the economy Vietnam’s AI law is designed to regulate and the economy many people are actually navigating is not a minor detail.


That gap is not a reason to dismiss the law. If anything, it clarifies Vietnam’s strategy. The country appears to be building regulatory infrastructure before AI systems become deeply embedded, asserting digital sovereignty before foreign platforms become unavoidable. The law calls for a national AI computing center and the development of large language models in Vietnamese. It places AI governance inside the country's 2045 development ambitions, framing it as economic strategy as much as risk management. That distinguishes it from many Western approaches, which often arrive after harms are already visible.


But AI governance frameworks are now emerging across jurisdictions with very different relationships to the technology they regulate. South Korea’s AI Basic Act took effect in January with extraterritorial reach. California now has three AI laws in effect. The EU AI Act’s high-risk system rules are phasing in through 2027. The problem is not that these frameworks exist too early. It is that they may rest on assumptions drawn from places where AI is already more deeply integrated — assumptions about infrastructure, digital literacy, and which harms matter most locally. In research I led on AI-powered misinformation in Kenya and Nigeria, expert panels returned to this point repeatedly: the most effective governance approaches were community-centered and built from local expertise rather than adapted from foreign templates.


Vietnam’s law is attempting something more difficult than simple imitation. It is trying to maintain digital sovereignty while borrowing from the EU’s regulatory architecture. But it is worth asking who will feel this law first. The legislation requires foreign AI providers to establish a local presence in Vietnam and subjects high-risk systems to conformity assessments approved by the Prime Minister. Grace periods give existing systems 12 to 18 months to comply. In practice, these provisions will reach foreign tech companies long before they reach the vendor in Hoi An who prefers cash to cards.


That may still be the right strategy. Regulating before harms fully arrive is better than waiting for them to harden into fact. But governance credibility is tested in the gap between who a law names on paper and who it reaches in practice. Vietnam’s AI law will matter not only if it disciplines foreign providers, but also if it eventually means something to the people buying fabric in the market, people who have not yet had to think about AI at all.

Ethan Ward

Award-winning journalist and product strategist focused on AI governance, algorithmic accountability, and responsible technology. Master of Public Diplomacy (USC). MSc in Human-Computer Interaction (University College Dublin). His work has appeared in USA Today, NPR, Slate, Fast Company, and PBS SoCal. Founding editor of INHERITANCE. Founder, HEATDRAWN.

https://iamethanward.com