Update on Decree 142/2026/ND-CP guiding the Artificial Intelligence Law in Vietnam

With the Law on Artificial Intelligence in Vietnam No. 134/2025/QH15 having officially taken effect on March 1, 2026, tech companies and startups face a radically new regulatory landscape in Vietnam. This legislation marks the country’s first dedicated act establishing a foundational legal framework for the research, development, provision, and use of artificial intelligence across all sectors. To detail its implementation, the Government issued Decree 142/2026/ND-CP on April 30, 2026, which took effect on May 1, 2026. This article by Viet An Law summarizes the key updates in Decree 142/2026/ND-CP guiding the Law on Artificial Intelligence in Vietnam, helping businesses navigate compliance mandates, risk classifications, and testing environments effectively.

Legal application considerations

Companies operating in the artificial intelligence sector must note new intersecting legal frameworks that apply simultaneously. For instance, Decree 353/2025/ND-CP (effective January 1, 2026) regulates the general digital technology sandbox under the guidance of the Law on Digital Technology Industry 2025. Because Decree 142/2026/ND-CP provides a distinct AI sandbox mechanism, an AI startup may fall under both regulatory regimes depending on the scope of its product.

Scope of application

Article 2 of Decree 142/2026/ND-CP stipulates that this regulation applies to:

  • Providers, developers, deployers, users of an AI system, and affected persons.
  • Foreign organizations and individuals participating in artificial intelligence activities within Vietnam, meaning that foreign startups operating in Vietnam also fall within the regulated scope.

Compliance transition periods

  • For any AI system put into operation before March 1, 2026: The system must comply with the regulations within a designated transition period.
  • Within 60 days from the effective date of the Decree (i.e., by June 30, 2026), transitioning organizations and enterprises must submit a notice on the one-stop portal along with a transition plan or compliance scheme; this notification does not trigger any prior administrative approval procedures.
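The 60-day window above is simple date arithmetic. As an illustrative check (not part of the Decree's text), the 60th day counted from the effective date falls on June 30, 2026:

```python
from datetime import date, timedelta

# Decree 142/2026/ND-CP took effect on May 1, 2026; transitioning
# enterprises have 60 days from that date to submit their notice.
effective_date = date(2026, 5, 1)
notice_deadline = effective_date + timedelta(days=60)

print(notice_deadline.isoformat())  # 2026-06-30
```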


Establishment of the one-stop portal for artificial intelligence and the national database on AI systems

A notable provision of Decree 142/2026/ND-CP is the first-time establishment of a one-stop portal for artificial intelligence and a national database on AI systems, both managed and operated by the Ministry of Science and Technology. Specifically:

Operation of the one-stop portal for AI

The Ministry of Science and Technology manages this portal to execute the following functions:

  • Receive risk classification results and conformity assessments, and issue identification codes for AI systems.
  • Receive incident reports, periodic reports, and support automated risk classification.
  • Publish the list of AI systems, violation handling results, technical regulations, and support mechanisms.
  • Receive guidance requests and monitor artificial intelligence usage within state agencies.

Operation of the national database on AI

Operation of the database must adhere to the following conditions:

  • Compliance with the law, combined with simplification of administrative procedures.
  • Exceptions: AI systems serving the internal operations of Party, political, national defense, and security agencies are exempt from the obligation to publicize information on this portal.

Mandatory classification of AI systems into three risk levels before deployment

Under Clause 1, Article 9 of the Artificial Intelligence Law 2025, the risk level of an AI system is divided into three tiers: high risk, medium risk, and low risk. Article 5 of Decree 142/2026/ND-CP explicitly requires that classification based on risk levels occur before putting the system into operation.

Two methods apply: (i) self-assessment or (ii) assessment by a conformity assessment body. For high-risk systems, the management agency may require an independent assessment.

Accordingly:

  • Providers: Must classify the AI system (not applicable to foundational AI models) before putting it into operation. This serves as a crucial note for startups building AI applications versus foundation models.
  • Deployers: Must coordinate to review and reclassify if any changes in functions or intended use generate new risks.

Note: Startups integrating third-party APIs (such as OpenAI, Claude, or Gemini) should take note that integration can alter the risk level and trigger reclassification obligations.

  • Conformity assessment for high-risk systems: Both providers and deployers must conduct conformity assessments in strict compliance with the Artificial Intelligence Law.
  • Regulations for AI-integrated products: Businesses must simultaneously comply with specialized laws (product standards and technical regulations) and meet the specific risk management requirements designated for the artificial intelligence component.
  • The Ministry of Science and Technology provides an electronic support tool for self-assessment and classification based on prescribed criteria; using this tool serves a supportive function, is not mandatory, and does not trigger administrative approval procedures.

Enterprises should note the compliance deadlines of Decree 142/2026/ND-CP (mentioned earlier in this article) to fulfill these obligations completely.

Risk level classification criteria for artificial intelligence systems

A significant new point is that incorrect classification carries legal liability. To provide detailed guidance, Articles 6 through 12 of Decree 142/2026/ND-CP define how to determine the risk levels of an artificial intelligence system as follows:

High risk

An AI system is classified as high risk if it falls under the List of High-Risk AI Systems issued by the Prime Minister, including:

  • The list of AI systems identified as high risk;
  • The list of high-risk AI systems requiring conformity certification prior to use.

An AI system is classified as high risk when meeting one or more of the following criteria:

  • Impact level: The potential to cause damage to life, health, property, human rights, national interests, public interests, or national security; the degree of system automation; the level of support for final decision-making; and the capacity for human supervision and intervention in executing actions;
  • Field of use: Deployment in essential sectors or fields directly related to public interests;
  • User scope and impact scale: The range of users, the scale of the affected population, or the degree of connection to critical technical infrastructure systems.

Medium risk

An artificial intelligence system falls into the medium-risk category when it simultaneously meets the following conditions:

  • It does not belong to the List of High-Risk AI Systems issued by the Prime Minister;
  • It has the capacity to confuse, influence, or manipulate users because the user cannot recognize that the interacting entity is an AI system or that the content is system-generated.

An AI system is also classified as medium risk if it does not fall under the above circumstances but belongs to the cases specified in Article 9 of this Decree.

In addition, the system must not fall under exceptions such as: systems that only support technical editing, do not create new content, and do not alter the identity of the subject; systems that do not interact with or provide services or content directly to the public, including cases where they do not release content to the public through third parties or intermediary platforms;…

Low risk

A low-risk artificial intelligence system does not fall under the medium or high-risk categories.
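The three-tier logic above can be condensed into a simple decision sketch. This is only an illustrative reading of Articles 6 through 12; the three boolean flags (`on_high_risk_list`, `can_mislead_users`, `article_9_case`) are hypothetical simplifications, and a real assessment must apply the Decree's full criteria and the Prime Minister's lists.

```python
def classify_risk(on_high_risk_list: bool,
                  can_mislead_users: bool,
                  article_9_case: bool) -> str:
    """Illustrative three-tier reading of Decree 142/2026/ND-CP.

    Every parameter here is a hypothetical simplification of the
    Decree's legal tests, not a statutory term.
    """
    # High risk: the system appears on the Prime Minister's list or
    # meets the impact/field/scale criteria (collapsed into one flag).
    if on_high_risk_list:
        return "high"
    # Medium risk: users cannot tell they are dealing with an AI
    # system or AI-generated content, or the system falls under the
    # cases specified in Article 9 of the Decree.
    if can_mislead_users or article_9_case:
        return "medium"
    # Low risk: everything that is neither high nor medium risk.
    return "low"

print(classify_risk(False, True, False))  # medium
```

A deployer integrating a third-party API would rerun such a check whenever a change in function or intended use could flip one of these flags.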

Dossier preparation and notification to the Ministry of Science and Technology for medium or high-risk AI systems

Unlike low-risk systems, medium or high-risk AI systems face strict management. Accordingly, the operator of these systems must prepare a dossier and notify the Ministry of Science and Technology. Specifically, Article 12 of Decree 142/2026/ND-CP requires:

  • Notification of medium- or high-risk AI systems
    • Receiving authority: Ministry of Science and Technology.
    • Form: The one-stop portal for artificial intelligence.
    • Timeline: Before putting the system into operation.
  • Notification of low-risk systems
    • The law encourages organizations and individuals to publicize basic system information to enhance transparency.

Note: Providers of high-risk AI systems must conduct a conformity assessment before putting the system into operation or whenever significant changes occur during use.

Mandatory AI labeling for content simulating real human voices and images from May 1, 2026

Mandatory labeling cases

Deployers must apply easily recognizable labels to audio, images, or videos generated or edited by an AI system in the two cases prescribed in Clause 2, Article 18 of Decree 142/2026/ND-CP, including:

  • Simulating or mimicking the appearance or voice of a real person;
  • Recreating factual events in a way that must be distinguished from authentic content, unless otherwise provided by law.

Four exemptions from labeling

Deployers are exempt from the display labeling obligation in the following four cases:

  • Technical editing: Improving audio, image, or video quality without altering the nature or context.
  • Text processing: Supporting error correction, summarization, or translation without distorting the original content.
  • Internal use: Circulating solely within an agency or organization without public release, which is particularly critical for startups building internal AI tools.
  • Research and testing: Conducting activities in a controlled environment without market provision.

Additionally, deployers must provide clear notification when publicly providing content generated or edited by an AI system that could cause confusion regarding the authenticity of events, characters, or the content’s origin.

Obligation to report severe AI system incidents within 72 hours

Article 19 of Decree 142/2026/ND-CP establishes a mechanism for reporting and handling severe incidents involving an AI system. Accordingly:

Circumstances considered severe AI system incidents

A severe incident of an AI system refers to an event occurring during the system’s operation that causes one of the following consequences:

  • Loss of life or severe harm to human health;
  • Significant property damage or severe disruption to the operations of an organization;
  • Serious infringement of human rights, or the lawful rights and interests of agencies, organizations, or individuals;
  • Severe disruption to the provision of public services or essential services as prescribed by law, or impacts on national security, social order, and safety.

Reporting obligations upon the occurrence of a severe incident

  • Reporting subject: The provider or the deployer (the deployer assumes reporting duties if the provider is unreachable).
  • Reporting format: Preliminary report using the prescribed form via the one-stop portal for AI.
  • Preliminary reporting timeline (from incident confirmation):
    • Emergency or loss of control incidents: Within 72 hours;
    • Other severe incidents: Within 05 working days.
  • Official reporting timeline: Submit the remediation results within 15 days of submitting the preliminary report.

Note: Entities must confirm the incident immediately upon gathering sufficient initial information, without waiting for a complete investigation. The act of reporting does not default to an admission of fault or legal liability.
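The two preliminary-report clocks can be sketched as deadline arithmetic. This is an illustrative helper only: it skips weekends when counting working days but does not model Vietnamese public holidays, and the emergency/non-emergency distinction is reduced to a single flag.

```python
from datetime import datetime, timedelta

def preliminary_deadline(confirmed_at: datetime, emergency: bool) -> datetime:
    """Illustrative deadline math for Article 19 preliminary reports.

    Emergency or loss-of-control incidents: 72 hours from the moment
    the incident is confirmed. Other severe incidents: 5 working days
    (weekends skipped; public holidays are not modelled here).
    """
    if emergency:
        return confirmed_at + timedelta(hours=72)
    deadline, working_days = confirmed_at, 0
    while working_days < 5:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 .. Friday=4
            working_days += 1
    return deadline

# An incident confirmed on Monday, June 1, 2026 at 09:00:
print(preliminary_deadline(datetime(2026, 6, 1, 9), emergency=True))
# 2026-06-04 09:00:00
```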

Conditions for participating in the AI Sandbox as a critical lever for startups

To guide Article 21 of the Artificial Intelligence Law 2025, Decree 142/2026/ND-CP dedicates Chapter IV to stipulating the principles, tier classification, authority, approval process, conditions, and dossiers for participating in the testing mechanism (Sandbox).

Significance: The sandbox mechanism allows a novel AI system to undergo testing in a controlled-risk environment, granting exemptions or reductions in compliance obligations based on testing outcomes.

Testing participation conditions

Specific regulations apply to small and medium-sized enterprises registering to participate when they meet the following conditions:

  • Possessing an AI system or solution proposed for testing that incorporates innovative elements, applies new technology, or utilizes a new deployment model;
  • Holding a testing scheme that clearly defines the objectives, scope, duration, participating subjects, and risk control measures appropriate to the testing tier;
  • Implementing measures to protect the lawful rights and interests of organizations and individuals impacted during the testing process;
  • In cases where the AI system directly threatens human life and health or poses a risk of large-scale property damage, the participating organization or individual must hold civil liability insurance or an equivalent financial guarantee matching the testing scope.

Sandbox participation dossier

Accordingly, organizations and individuals must submit one dossier set via electronic methods through the National Public Service Portal to:

  • Provincial People’s Committees: For confirming tier 1 and tier 2 systems deployed within a single province.
  • Ministries and ministerial-level agencies: For confirming tier 1 and tier 2 systems deployed across two or more provinces or by units subordinate to the Ministry.
  • Ministry of Public Security: For confirming tier 3 systems.

For valid dossiers, the competent authority must organize an appraisal and issue a confirmation certificate.
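The routing of a sandbox dossier to the correct receiving authority can be sketched as a lookup. The tier numbers and parameters below mirror the list above but are simplified assumptions; the Decree's full rules on tier classification govern in practice.

```python
def receiving_authority(tier: int, provinces: int,
                        ministry_subordinate: bool = False) -> str:
    """Illustrative dossier routing under Decree 142/2026/ND-CP."""
    if tier == 3:
        # Tier 3 systems are confirmed by the Ministry of Public Security.
        return "Ministry of Public Security"
    if tier in (1, 2):
        # Deployment across two or more provinces, or by a unit
        # subordinate to a ministry, goes to the ministry level.
        if provinces >= 2 or ministry_subordinate:
            return "Ministry or ministerial-level agency"
        # A single-province deployment goes to the provincial level.
        return "Provincial People's Committee"
    raise ValueError(f"unknown testing tier: {tier}")

print(receiving_authority(1, provinces=1))  # Provincial People's Committee
```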

Policy advantages for IT enterprises in Vietnam

The Artificial Intelligence Law establishes a foundation of incentive and support policies designed to stimulate startup business operations in Vietnam:

  • High incentives: Facilitated access to computing infrastructure, data, and testing environments (Article 20).
  • Investment by the National AI Development Fund: An off-budget financial fund offering flexible mechanisms and risk tolerance in innovation. It prioritizes investing in AI infrastructure, human resource training, core technology, and supporting innovative startups (Article 22).
  • Cost support: Financial assistance for conformity assessments, self-assessment tools, and access rights to shared data. This serves as a specific support policy for startups and small-to-medium enterprises (Article 25).
  • Priority in Sandbox participation: Technology enterprises, small and medium-sized enterprises, innovative startups, science and technology organizations, and higher education institutions participating in developing, providing, or applying artificial intelligence receive priority in the implementation of support policies (Decree 142/2026/ND-CP).
  • Public bidding opportunities: Prioritized use of AI products and services in IT application tasks, digital transformation projects, digital government development, digital economy initiatives, and public service provision; encouraged use of innovative ordering and procurement models.

Understanding the updates in Decree 142/2026/ND-CP guiding the Artificial Intelligence Law in Vietnam ensures your business remains compliant and highly competitive. To receive consultation on applying these new regulations in IT corporate governance, legal advice on artificial intelligence in Vietnam, or AI dispute resolution, please contact Viet An Law for timely support.



    CONTACT VIET AN LAW

    In Hanoi: (+84) 9 61 67 55 66
    (Zalo, Viber, Whatsapp, Wechat)

    In Hochiminh: (+84) 9 61 67 55 66
    (Zalo, Viber, Whatsapp, Wechat)
