| CIVIC-SCOPE Analysis | |
|---|---|
| Context | Interests |
| Rapid proliferation of AI tools (deepfakes, algorithmic decision-making) outpacing legal frameworks. Risks of nonconsensual imagery, biased automated decisions, and evidence fabrication in courts. | Victims: Need rapid recourse against deepfakes and algorithmic harm. Tech Vendors: Want frictionless deployment and minimal liability; prefer opacity for convenience, reduced accountability, and a lighter monitoring burden. State: Needs to use AI for efficiency but risks eroding trust/rights. Legal System: Faces an existential crisis of evidence. |
| Vision | Incentives |
| A pre-emptive regulatory architecture protecting human dignity and truth. Zero tolerance for nonconsensual synthetic imagery, mandatory "human-in-the-loop" for high-stakes decisions, and watermarked evidence chains. | Abusers: Incentivized by low cost/risk of generating harm (deepfakes). Companies: Incentivized to offload decision responsibility to "black boxes." Public: Incentivized to adopt tools for convenience, often ignoring privacy/verification. |
| Challenges (SCOPE) | |
| Structural: Enforcing local laws on global platforms (e.g. Telegram, unmoderated AI sites) that have no physical presence in the Maldives. Capacity: Limited local knowledge and understanding of AI across the board, from lawmakers and regulators to agencies; establishing a credible algorithmic audit regime before the country has a deep pool of technical AI experts. Operational: Retrofitting digital watermarking into thousands of existing, fragmented CCTV and security systems across the country. Political: Balancing the need for strict regulation against stifling local innovation or being labelled "anti-tech." Economic: The cost of compliance (audits, human review) may price local SMEs out of using AI tools, consolidating power with big foreign vendors. | |
| Challenge Score (1 to 5) | Budget: 3, Logistics: 3, Legislative: 5, Political Capital: 2, Execution: 3-4, Time: 3-4, Stakeholders: 3-4, Risk: 4 |
The Maldives is about to face a regulatory crisis that moves faster than our institutions can react. For the past two decades, digital technology has posed manageable, evolutionary challenges – privacy concerns, data storage, online fraud. These were extensions of existing problems. Artificial intelligence is different. It doesn't merely extend old problems; it creates entirely new categories of harm that our current laws cannot even describe, let alone protect us against.
The core issue is a reshaping of the balance between the capacity to enforce and the capacity to commit harm worthy of enforcement, with AI tools creating a massive asymmetry toward the latter. A single individual with access to open-source generative AI tools can now inflict damage at a scale that previously required the resources of a state intelligence agency or a multinational corporation. One person can generate thousands of nonconsensual intimate images of their classmates or colleagues in minutes. A small team can flood a legal system with thousands of pages of plausible-looking but entirely fabricated legal briefs. An unscrupulous employer can screen ten thousand job applicants in an hour using a biased algorithm that operates as a black box, making discrimination not just efficient but invisible. Our legal system, built around human actors and human-scale actions, has no mechanism to assign responsibility when the harm flows from an automated system whose decisions even its creators cannot fully explain.
The damage is not theoretical; it is already here, both globally and almost certainly in the Maldives. Across the world, women are already facing harassment campaigns fuelled by AI-generated pornography. Job applicants are being filtered out by opaque systems that penalize them for gaps in their resumes or the wrong keywords. Courts are grappling with evidence that looks real but was never captured by a camera. The question is not whether these harms will appear in the Maldives – they almost certainly already have – but whether we will have a framework ready to meet them when they become visible. The traditional approach of waiting for problems to emerge and then drafting laws in response is catastrophically inadequate here. By the time we recognize a pattern of harm, thousands of citizens may have already suffered irreversible damage to their reputations, careers, or legal rights. We need a different posture: not reactive regulation, but pre-emptive architecture. We make this point upfront because the temptation will be to wait, to study the issue more, to see how other countries handle it first. But waiting is itself a choice with consequences, and those consequences fall most heavily on the people least able to defend themselves.
The need for immediate legal attention
Three areas need early legal attention. By early, we mean starting the legislative process now, rather than in two or three years when the problems become undeniable. The first is synthetic media and deepfakes: there should be clear duties on those who create or distribute synthetic audio and video to label them, and clear penalties for malicious use in politics, fraud or harassment. The second is public decision-making: whenever an AI system is used to support or make decisions that affect people’s rights or entitlements, there must be a human-readable explanation, a named public authority that remains accountable and a right of appeal to a human reviewer. The third is procurement and audit: public bodies should only buy or deploy AI systems that meet basic standards of transparency, explainability and security, and there should be a regular process of independent audit.
The OECD developed AI principles in 2019, but given the pace of AI development in recent years, the era before ChatGPT might as well be a century ago. The technology landscape has shifted so much that these principles, while still useful, are now very incomplete. That said, they can provide a starting point. The OECD's 2019 principles on trustworthy AI stipulate that AI actors must respect the rule of law, human rights, diversity and fairness, including non‑discrimination and privacy. They call for human agency and oversight, meaning that people should be able to challenge AI‑generated decisions and systems should include mechanisms for human control. The principles also demand transparency and explainability – AI developers should disclose data sources, model logic and potential impacts to affected stakeholders – and ensure robustness, security and accountability.[13] The EU AI Act, adopted in 2024, is the world's first comprehensive AI regulatory framework. It applies a risk‑based approach: AI systems posing unacceptable risks, such as social scoring or manipulative behaviour, are banned entirely. High‑risk systems (e.g. in critical infrastructure, education, employment and law enforcement) must undergo risk assessment, use high‑quality datasets, provide transparency, and ensure human oversight and robustness.[14] By establishing obligations and penalties, the Act aims to ensure safe, ethical, and trustworthy AI while supporting innovation. We can learn from this framework, though we will need to adapt it to our specific context and capacity constraints.
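To make the risk-based approach concrete, the toy sketch below shows its tiering logic. The tier names follow the EU AI Act (which also defines "limited" and "minimal" tiers beyond the two described above); the obligation summaries and use-case sets are our paraphrases and illustrative assumptions, not quotations from the Act, and a Maldivian adaptation would define its own tiers and duties.

```python
# Toy sketch of a risk-based obligation lookup in the spirit of the EU AI Act.
# Tier names follow the Act; obligation text and use-case sets are paraphrases
# and illustrative assumptions, not quotations from the Act.
RISK_TIERS = {
    "unacceptable": "banned outright (e.g. social scoring, manipulative systems)",
    "high": "risk assessment, quality datasets, transparency, human oversight",
    "limited": "transparency duties (e.g. disclose that content is AI-generated)",
    "minimal": "no specific obligations beyond existing law",
}

def classify(use_case: str) -> str:
    """Map a use case to a risk tier (very rough illustration)."""
    if use_case in {"social_scoring", "subliminal_manipulation"}:
        return "unacceptable"
    if use_case in {"critical_infrastructure", "education",
                    "employment_screening", "law_enforcement"}:
        return "high"
    if use_case in {"chatbot", "content_generation"}:
        return "limited"
    return "minimal"

tier = classify("employment_screening")
print(tier, "->", RISK_TIERS[tier])
```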
Zero tolerance for nonconsensual imagery
The most visceral and immediate threat posed by generative AI is its potential for malicious use against individuals, particularly women. We are witnessing the democratization of sexual violence through digital means. Tools that can strip clothes from photos or graft faces onto pornographic videos are now widely available, often for free or a nominal fee. This creates a world where simply existing in public – having a social media profile, appearing in a school photo – carries the risk of being turned into pornography against one's will. European estimates suggest that 98% of deepfakes are pornographic, and the number of such videos doubled every six months leading up to 2024.[15] Current laws on harassment and defamation are woefully insufficient for this reality, having been built for a world where evidence was scarce and hard to produce. Today, an abuser can generate unlimited, photorealistic "proof" of acts that never occurred. By the time anyone realizes what happened, the images are already circulating. The harm is not the images alone, but also the psychological terror of knowing such material exists and the reputational ruin that follows. For example, a teenage boy with access to free software can victimize dozens of female classmates in an afternoon – are schools prepared or equipped to deal effectively with such a situation? Are the police? Is the general public?
We propose a zero-tolerance legal framework to address this head-on. The legal response needs to be unambiguous and severe. We could establish that the creation, possession, or distribution of nonconsensual intimate imagery (whether real or synthetic) is a serious criminal offense. The penalties need to be substantial enough to create real deterrence – we are talking prison time, not fines. The burden of proof should not fall on the victim to prove the image is synthetic, but on the accused to explain why they created or possessed such material without consent. This reversal is critical because victims often cannot access the original material or technical expertise needed to prove synthesis. There is a need to bring the hammer down, so to speak, on the creation of involuntary pornography and any nonconsensual AI-generated image or video of a person. This explicit national ban would cover the broader issues of revenge pornography and the nonconsensual release of private data (doxxing), treating them as severe violations of bodily autonomy.
These enforcement policies must be designed for fast and comprehensive responses, since AI lets even a single individual disseminate and mass-reproduce harm, or target many people at once with automated bots, in far shorter timeframes and with far fewer resources than before. A legal process that takes months is useless to a victim whose images are going viral today. For policies, we can look to Australia's eSafety Commissioner, which has the power to issue removal notices to platforms and impose civil penalties on individuals who post non-consensual imagery.[16] This sends a clear signal: your right to use AI ends where another person's dignity begins.
Data sovereignty and the right to be forgotten
In the AI era, every photo, comment, and transaction we generate feeds the models that will eventually profile, target, and simulate us. The current notice-and-consent model, where we click "agree" on long terms of service without truly understanding what that entails or how it could change, is broken. It assumes that if we consented once, we consented forever, for any use. This is manifestly unfair, particularly as the uses to which data can be put evolve in ways that were unimaginable when consent was originally given.
We could shift the power balance back to the individual by establishing a set of fundamental data rights that go beyond the European GDPR. While GDPR focuses on data protection, we would focus on data sovereignty (the idea that individuals should have meaningful control over their personal data, not just notification about how it is being used). This means establishing a robust right to be forgotten, allowing citizens to demand the erasure of personal data held by companies when its retention is no longer justified.[17] If a person wants to scrub their location history from an e-commerce app or their photos from a social app, that would be a simple, enforceable right, not a negotiation or a favour the company might grant. The deepfake capabilities of generative AI discussed earlier are part of the reason this change is needed. At the time people uploaded their pictures online, they could not have foreseen the possibility of a bad actor harvesting those images for nonconsensual use. People deserve the right to have their privacy choices actually upheld in the face of proliferating generative AI. The same applies to other uses of generative AI at scale, such as hackers researching and compiling information about individuals – work that would otherwise demand a financially impractical amount of time and effort – as a means to break into their personal accounts.
The forever memory of AI models creates a particular challenge. Once a model is trained on personal data, that data is often baked into its neural weights, impossible to extract without retraining the entire model – something so expensive and technically complex that most major AI companies, having invested billions in training effective models, would never do it. Some information about each of us is likely already baked into model parameters, including incorrect or mistaken data that cannot be directly corrected once found, the way a false website surfacing in search results can be. To prevent this in future, though, we could establish national data privacy rules that require companies to prove they can actually delete user data from their systems. Of course, as a small country, we recognize that the Maldives has insufficient power to enforce this alone, but a state with a legal apparatus is likelier to get an audience with major corporate leadership, or at least to persuade them to make whatever small risk-reducing changes they can – and even small changes are better than nothing.
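As an illustration of what "prove they can actually delete" could mean for stored records (model weights, as noted above, are a harder problem), here is a minimal sketch of an erasure-and-attestation routine. The store layout, field names, and the ErasureAttestation structure are all hypothetical; a real system would also have to cover backups and derived datasets.

```python
# Minimal sketch: delete a subject's records across named stores, re-scan,
# and emit an attestation receipt. All names here are illustrative assumptions.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ErasureAttestation:
    subject_hash: str          # salted hash, so the receipt itself holds no PII
    systems_purged: list[str]
    residual_records: int      # must be zero for the attestation to be valid
    completed_at: str

def erase_and_attest(subject_id: str, salt: bytes,
                     stores: dict[str, list[dict]]) -> ErasureAttestation:
    """Delete every record for subject_id across all stores, then re-scan."""
    for name in list(stores):
        stores[name] = [r for r in stores[name]
                        if r.get("subject_id") != subject_id]
    residual = sum(1 for records in stores.values()
                   for r in records if r.get("subject_id") == subject_id)
    return ErasureAttestation(
        subject_hash=hashlib.sha256(salt + subject_id.encode()).hexdigest(),
        systems_purged=list(stores),
        residual_records=residual,
        completed_at=datetime.now(timezone.utc).isoformat(),
    )

# Hypothetical usage: two internal stores holding a user's records.
stores = {
    "orders": [{"subject_id": "u123"}, {"subject_id": "u456"}],
    "photos": [{"subject_id": "u123", "url": "..."}],
}
receipt = erase_and_attest("u123", b"per-deployment-salt", stores)
assert receipt.residual_records == 0
```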
We would pair this with a clear right for people whose private data has been released to file for immediate takedown, ensuring that the harm of a data breach is not compounded by the permanent availability of that data to AI scrapers and bad actors. This is about recognizing that once your data is out there being used to train models, it becomes increasingly difficult to claw back any meaningful control over your digital identity. The right to be forgotten needs teeth for enforcement. Once again, we are aware that a small country making up a tiny fraction of tech giants' markets has very little power, but setting out a baseline and establishing standards could be a first step toward encouraging other countries to follow that lead. To accelerate these standards, mutual agreements and statements on rights against AI misuse could be settled with neighbouring countries that do have the population and market power to inform policies. Establishing clear legal standards can create the basis for collective action with other small states and for joining regional or international enforcement mechanisms. The worst option is to have no standards at all, which leaves citizens with no recourse whatsoever.
Truth and evidence: safeguarding the courts
The foundation of our justice system is the assumption that evidence reflects reality. AI destroys this assumption. Generative tools can now create video footage of crimes that were never committed or a massive paper trail of documents that were never written. We are entering an era where "seeing is believing" becomes a liability rather than wisdom. The threat is not only that fake evidence will be introduced, although this threat alone is serious enough to deserve major action. There is also what researchers call a "liar's dividend": real evidence can be dismissed as fake because fakes are so common. A defendant caught on video committing a crime can now claim the video is a deepfake, and absent clear proof of authenticity, that claim creates reasonable doubt. This undermines the very concept of objective truth in legal proceedings.
To protect our courts and public trust, we must guarantee the authenticity of evidence. This requires proactive technological solutions including, where possible, digital watermarking. We could mandate digital watermarking for all government-operated CCTV and recording devices. This means every frame of video recorded by the police or public surveillance systems carries a cryptographic signature proving when and where it was captured, and that it has not been altered. This allows the state to prove, explicitly, that its evidence is authentic if needed. Similarly, it gives falsely accused defendants a definitive way to show that footage of them is fabricated, or at least to gauge the strength of the evidence – better than having defence and prosecution field duelling forensic video experts, or asking judges and juries to go off their gut judgement.
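As a minimal sketch of the signing-and-verification idea (not the C2PA standard itself, and with illustrative field names such as camera_id and captured_at), each recorded segment could be hashed, bound to its capture metadata, and signed with a device key whose public half the state publishes. The sketch assumes the third-party Python `cryptography` package.

```python
# Minimal sketch of per-segment evidence signing. Illustrative only: field
# names and record layout are assumptions, not a standard.
import json
import hashlib
from datetime import datetime, timezone
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def sign_segment(device_key: Ed25519PrivateKey, video_bytes: bytes,
                 camera_id: str, location: str) -> dict:
    """Bind a segment's hash to capture metadata and sign the bundle."""
    manifest = {
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
        "camera_id": camera_id,
        "location": location,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": device_key.sign(payload).hex()}

def verify_segment(public_key: Ed25519PublicKey, video_bytes: bytes,
                   record: dict) -> bool:
    """Anyone holding the published public key can check integrity and origin."""
    if hashlib.sha256(video_bytes).hexdigest() != record["manifest"]["sha256"]:
        return False  # footage was altered after signing
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # manifest forged or signed by a different key

# Hypothetical usage: the public half of each camera's key would be published.
key = Ed25519PrivateKey.generate()
footage = b"...raw video segment bytes..."
record = sign_segment(key, footage, camera_id="CAM-017", location="Male' Harbour")
assert verify_segment(key.public_key(), footage, record)
```

This is the property a published verification process would rely on: any judge, lawyer, or investigator holding the public key can independently check a file without trusting the party who produced it.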
We strongly advocate for this same watermarking standard to be adopted in private CCTV and security systems. The goal is to create a "chain of trust" for digital media. We align with emerging global standards like the Coalition for Content Provenance and Authenticity (C2PA),[18] which provides an open technical standard for certifying the source and history of media content. Camera manufacturers like Leica and Sony are already integrating C2PA standards into their hardware, creating a verified provenance trail from the moment of capture. By adopting these standards early, the Maldives ensures its legal system remains robust against the wave of synthetic evidence already crashing into courts worldwide. A public, accessible verification process should be published so that any judge, lawyer, or investigator can independently verify the integrity of a file. This would not be a perfect solution: watermarks can potentially be stripped, though that itself becomes evidence of tampering; increasingly advanced fabrication could mimic watermarks; code for mimicking watermarks might eventually be disseminated online; and so on. But there is no certain fix now that the Pandora's box of AI has been opened. All we can do is ensure that there is a much higher bar for fake evidence to be introduced into legal proceedings. Without this kind of infrastructure, we risk a future where courts cannot rely on video or audio evidence at all because the possibility of forgery is too high. That would be catastrophic for justice: many crimes are only provable through recordings, and the possibility that video evidence was AI-generated creates reasonable doubt even for criminals caught outright. We need to act now to preserve the evidentiary value of digital media before the problem becomes unmanageable.
Accountability in the black box
A critical domain is the use of AI systems to make or support decisions that affect people's rights, opportunities, or entitlements. This is already happening across both public and private sectors (loan approvals, job application screening, social housing eligibility, university admissions). We cannot allow institutions to hide behind an algorithmic "black box." Across the world, we see AI systems making life-altering decisions – who gets hired, who gets a loan, who gets parole – based on logic that is hidden from the people it affects. An applicant is rejected, a claim is denied, but there is no human who can explain why, because the algorithm made the decision based on patterns in data that even the developers cannot fully articulate. When these systems fail, they fail at scale, and often with bias. In the US, the case of Mobley v. Workday (2024) highlighted how an AI hiring tool allegedly screened out applicants based on race, age, and disability, operating as a gatekeeper with no human oversight.[19][20][21][22] Our framework sets clear rules that name who is responsible when an AI system harms someone. If an AI system unfairly fires or rejects a person from a job, there must be a clear path to remedy and a human party who is held liable. The deploying institution – the company or agency that bought and used the tool – is responsible for its outputs. Algorithmic opacity is incompatible with the principles behind legal protections of rights. People have a right to understand why they were treated a certain way by their government or by institutions that affect their life chances. They have a right to challenge decisions and seek redress when those decisions are wrong. "The algorithm did it" is not a defence.
When an AI system is involved, those rights evaporate unless we build them in deliberately. We could establish a clear legal principle: whenever an AI system is used to make or substantially contribute to a decision that affects a person's rights, opportunities, or access to services, there must be a human-readable explanation of the decision logic, a named official who remains accountable for the outcome, and a clear pathway for appeal to a human reviewer who has the authority to override the system. This applies whether the system is being used by government agencies, banks, employers, or educational institutions. It cannot be sufficient to say "the algorithm said no." The explanation needs to identify which factors were decisive – was it credit history? employment gaps? lack of certain keywords in a resume? – present this in plain language, and allow the person to understand what they would need to change to get a different outcome next time. This may require keeping human-readable decision logs alongside algorithmic outputs, which adds cost and complexity, but this is the price of using powerful tools in high-stakes contexts. The accountability requirement is equally important. There must always be a named human official who is responsible for the decision, even if they relied on algorithmic input. If an algorithm wrongly denies someone housing assistance and that person becomes homeless as a result, there needs to be someone who can be held accountable, someone who can be sued or disciplined. Without this, we create an accountability gap where everyone can blame the algorithm and no one takes responsibility for outcomes.
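A minimal sketch of the kind of human-readable decision record this principle implies is shown below. The DecisionRecord structure and all field names are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of a human-readable decision log entry: plain-language
# reasons, a named accountable official, and an appeal route. Illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    subject_ref: str              # pseudonymous reference to the affected person
    outcome: str                  # e.g. "loan application declined"
    decisive_factors: list[str]   # plain-language reasons, not raw model weights
    accountable_official: str     # named human who owns the outcome
    appeal_contact: str           # where a human review can be requested
    decided_on: date = field(default_factory=date.today)

    def plain_language_summary(self) -> str:
        reasons = "; ".join(self.decisive_factors)
        return (f"Outcome: {self.outcome}. Decisive factors: {reasons}. "
                f"Responsible official: {self.accountable_official}. "
                f"To appeal to a human reviewer, contact {self.appeal_contact}.")

# Hypothetical usage:
record = DecisionRecord(
    subject_ref="APPL-2031",
    outcome="job application not shortlisted",
    decisive_factors=["employment gap of 18 months",
                      "missing required certification"],
    accountable_official="Head of Recruitment",
    appeal_contact="hr-review@example.gov.mv",
)
print(record.plain_language_summary())
```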
This applies to professional negligence as well. We are already seeing cases where lawyers submit legal briefs written by AI that contain "hallucinations" – citations to non-existent court cases. In Mata v. Avianca (2023), a New York lawyer was sanctioned for submitting a brief filled with fake cases generated by ChatGPT.[23] Our rules are explicit: if an AI-generated legal brief contains mistakes that lead to bad outcomes, the human professional and the deploying institution remain fully responsible. We also propose to ban fully automated adverse decisions in high-stakes domains. In areas like criminal justice, essential social services, and employment termination, a human being must knowingly sign off on the decision. This "human in the loop" requirement cannot be a rubber stamp; the human must have the authority and the information to override the AI.
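A minimal sketch of what such a gate could look like in deployment code follows; the domain list, function name, and error handling are illustrative assumptions.

```python
# Minimal sketch of a "human in the loop" gate: block fully automated adverse
# outcomes in high-stakes domains. Domain names are illustrative assumptions.
HIGH_STAKES_DOMAINS = {"criminal_justice", "essential_social_services",
                       "employment_termination"}

def finalize_decision(domain: str, ai_recommendation: str, is_adverse: bool,
                      human_signoff: str | None) -> str:
    """Refuse fully automated adverse outcomes in high-stakes domains."""
    if domain in HIGH_STAKES_DOMAINS and is_adverse and human_signoff is None:
        raise PermissionError(
            f"Adverse decision in '{domain}' requires sign-off by a named "
            "human with the authority and information to override the AI."
        )
    return ai_recommendation
```

For the sign-off to be more than a rubber stamp, the reviewer would need to see something like the DecisionRecord explanation above before approving, not just an approve/reject button.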
Of course, this does raise practical challenges. What counts as a high-stakes decision? How much information does the human need to meaningfully review an algorithmic recommendation? How do we verify that humans are actually reviewing rather than rubber-stamping? These are not easy questions, but they need to be worked through in legislation and regulation rather than being left to individual companies to figure out. We have heard from government agencies and private companies that they want clearer guidance on when they need human review and what that review should entail, so this is not just about imposing burdens but providing clarity that helps everyone operate responsibly.
Standards for public sector AI procurement
Public bodies should only procure or deploy AI systems that meet basic standards of transparency, explainability, and security. Before any AI system is deployed in a public-facing role (making or supporting decisions about benefits, permits, services, enforcement), it should undergo an independent audit that assesses bias (does it systematically disadvantage certain groups?), robustness (how often does it make errors, and what kinds?), and security (can it be manipulated or hacked?).
The results of these audits should be public, not classified as procurement confidentiality. Citizens have a right to know what systems are being used to make decisions about their lives and how well those systems actually perform. This transparency also creates incentives for vendors to improve their systems (since poor audit results will hurt their reputation and sales) and for procuring agencies to choose better tools. There should also be a regular process of ongoing audit, not just a one-time assessment before deployment. AI systems can drift over time (their performance degrades, they develop new biases as the underlying data changes, updates introduce new bugs). A system that was fair and accurate when deployed might become unfair and unreliable two years later without anyone noticing if there is no systematic monitoring. We need to build ongoing evaluation and accountability into the procurement contracts themselves, with clear triggers for suspension or termination if a system fails to meet standards.
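To illustrate one concrete check such audits might run, here is a minimal sketch of the "four-fifths" disparate impact test over a system's decision logs. The 0.8 threshold comes from long-standing US employment guidance and is used here as an illustrative assumption, not a proposed Maldivian standard; re-running the same check on fresh logs each quarter is one way to catch the drift described above.

```python
# Minimal sketch of a disparate impact check: compare selection rates across
# groups and flag those below a threshold fraction of the best-treated group.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, was_selected) pairs from the system's logs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions: list[tuple[str, bool]],
                           threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose rate falls below `threshold` x the highest group rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical usage with toy log data:
logs = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
     + [("group_b", True)] * 50 + [("group_b", False)] * 50
print(disparate_impact_flags(logs))  # {'group_b': 0.625} -> fails the check
```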
This will add costs and complexity to government IT procurement, which is already challenged (as discussed in the digitalization section). But the alternative is worse: deploying powerful systems without understanding how they work or whether they are fair, then discovering failures only after real harm has occurred. We have seen this pattern repeatedly in other countries (criminal justice systems that amplify racial bias, benefits systems that wrongly deny people assistance, border control systems that flag innocent people for additional scrutiny based on irrelevant factors). Learning from those failures is cheaper than repeating them.
We should be realistic about implementation. Building the capacity to conduct meaningful AI audits will take time and expertise we do not currently have. In the short term, we may need to rely on external expertise (international consultants, partnerships with universities, regional cooperation with other small states facing similar challenges). But we should plan to build domestic capacity over time, training Maldivian auditors and technical reviewers so we are not permanently dependent on outside expertise.
Public resilience: media literacy as survival
Rules alone cannot protect us from everything. The ultimate defence against AI-driven disinformation and fraud is a resilient public that knows how to spot manipulation and verify information. We are moving into a world where phone scams use the cloned voice of a loved one to demand emergency funds, and "news" sites are populated by AI-generated outrage designed to drive engagement or cause societal chaos. The FTC's Voice Cloning Challenge in the US highlights the rapid sophistication of these scams, where a few seconds of audio is enough to clone a voice (FTC Voice Cloning Challenge: https://www.ftc.gov/news-events/contests/voice-cloning-challenge).[24] The scam works because people overestimate their ability to tell a loved one from a replica, not realizing how sophisticated the technology has become, and because the fear that someone they love is in danger short-circuits critical thinking.
We could treat media literacy not as a nice-to-have educational add-on, but as a civic survival skill. This means running broad public campaigns that teach people how to spot AI-generated photos and videos (telltale signs like unnatural lighting, weird hands, inconsistent backgrounds), how to be sceptical of unexpected communications even if they appear to come from someone you know (establish verification procedures with family members, use code words, call back on a known number before acting), and how to verify information before sharing it (check multiple sources, look for original reporting, be suspicious of content designed to make you angry). We can integrate these skills into the national curriculum starting from early secondary school, teaching students to question what they see online, to check sources, to understand how AI-generated content works and why it is created, and to weigh the likely motives and incentives behind a given piece of content – whether someone would have a commercial or political incentive to generate it with AI.
This is not about making everyone a technical expert, but about developing a general scepticism and verification habit that makes people harder to fool. Catching 100% of cases may be unfeasible when the most advanced fakes rely on highly sophisticated technology, but a massive share of AI-generated harmful content is likely to be created by people who are not the most technically skilled users with the most powerful tools. Catching the 70% or so at the bottom of the barrel still prevents a massive amount of harm. Similarly, slowing the rapid spread of harmful AI-generated content is also useful. In the same way that we teach children to look both ways before crossing the street, we need to teach everyone to verify before trusting as a heuristic for the digital age. The goal is not perfect immunity to AI manipulation, which is impossible, but raising the cost and difficulty of manipulation enough that most scams and disinformation campaigns fail to gain traction. If even 40% of people routinely verify surprising claims before believing or sharing them, that is enough to slow the spread of false information dramatically. This also requires working with community leaders, religious leaders, educators, and other trusted voices to spread these messages. People are more likely to listen to someone they already trust than to a government PSA, so building a coalition of voices advocating for verification and scepticism is important.
Building capacity where none exists
Perhaps the biggest challenge around AI policy is that we currently do not have the local capacity to implement these frameworks properly. We do not have many legal experts who understand AI well enough to draft precise legislation that covers edge cases and anticipates how the technology will evolve. We do not have many technical experts who can audit algorithms to assess bias and robustness. We do not have judges and government lawyers who have experience adjudicating these kinds of cases (for example, what counts as algorithmic discrimination? how do you prove intent when an algorithm made the decision?). This is not unique to the Maldives. Most small countries face similar capacity constraints for complex technical issues, and even larger advanced countries usually struggle to keep up with governance aspects of new technology. In particular, there is likely to be little overlap between people who have strong knowledge of AI and those who are most able to create policy around AI, such as elected lawmakers or career politicians. We cannot implement a sophisticated AI governance framework if we do not have people who understand what they are regulating and enforcing.
We could approach this through a combination of targeted capacity building and strategic partnerships. For legal expertise, we could work with international organizations (the OECD, the ITU, regional bodies like SAARC) to access model legislation and technical assistance in adapting it to our context. Several organizations have developed template AI laws that small countries can customize, which is much faster than starting from scratch. For technical audit capacity, we could explore partnerships with universities or research institutes in South Asia that have relevant expertise in computer science and AI ethics, creating an arrangement where they provide audit services in the short term while training Maldivian counterparts for the longer term. This could be structured as a multi-year partnership where the first year is primarily done by the external partner, the second year is co-supervised, and by year three Maldivian auditors are taking the lead. For judicial capacity, we could develop training programs for judges, prosecutors, and defence lawyers on AI-related cases, using case studies and simulations from other jurisdictions. This does not need to make everyone an expert, but it needs to give legal professionals enough understanding to ask the right questions and evaluate expert testimony. We have heard from lawyers and judges that they feel unprepared for cases involving digital evidence and algorithmic decision-making, but there has been little systematic effort to address this gap through training or professional development.
We should also be realistic about what we can and cannot do immediately. Some elements of this framework, such as laws around nonconsensual AI-generated images and deepfake pornography, can be enacted relatively quickly because they build on existing legal concepts around consent, privacy, and harm. We already have laws against harassment and defamation; we are just extending them to cover a new form of harm. Other elements (like the algorithmic audit regime) will take longer to implement properly because they require building new institutions and expertise from scratch. The key is to start the process now and build capacity in parallel with regulatory development, rather than waiting until we have perfect capacity before acting. We could start with the easier pieces – banning the most egregious harms and establishing basic principles around transparency and accountability – while building the infrastructure and expertise needed for more sophisticated enforcement over time.
Why this matters for sovereignty and state capacity
We make this point about AI governance to emphasize that this is not only about protecting individuals, but also about national sovereignty and institutional capacity. If we do not establish our own rules around AI use, we will end up defaulting to whatever rules are embedded in the systems we import. Tech companies will make choices about fairness, transparency, and accountability, and we will inherit those choices whether they align with our values or not. By establishing clear standards upfront, we create bargaining power. We can tell vendors that if they want to sell systems to the Maldivian government or operate in the Maldivian market, those systems need to meet our transparency and audit requirements. Some vendors will refuse, particularly if our market is too small to justify customization. But with intense competition among international labs producing near state-of-the-art technology, and powerful open models that experts can adapt to fit our governance standards, these requirements will not shut us off from top-end AI tools. If we wait until after deployment to discover problems, we lose that leverage and become locked into systems that do not serve us well, and extracting ourselves becomes prohibitively expensive and complicated.
This is also about building state capacity for the long term. AI is not a temporary phenomenon or a passing trend. It is becoming embedded in nearly every domain of economic and social life (from agriculture to healthcare to education to finance). A state that cannot govern AI effectively is a state that cannot govern effectively in the 21st century. We need to build the legal frameworks, technical expertise, and institutional arrangements now, while we still have some room to manoeuvre, rather than waiting until we are in crisis mode and making reactive decisions under pressure.
There is also a justice dimension to this. Right now, the harms from AI (biased hiring algorithms, nonconsensual imagery, manipulated evidence) fall disproportionately on those who are already vulnerable (women, minorities, people with limited resources to fight back legally). Without clear rules and enforcement, these harms will accumulate and worsen. By acting now to establish protections, we are making a statement about whose interests matter and who deserves protection. This is fundamentally about what kind of society we want to be, not just what technical standards we want to adopt. The alternative is a future where our legal system cannot keep up with the pace of technological change, where our citizens have no recourse against algorithmic harms, and where our institutions lose legitimacy because they cannot explain or justify their own decisions. That trajectory is avoidable, but only if we start taking it seriously now rather than treating it as a distant future problem. The harms are already here; the question is whether we will respond or just let them accumulate until they become overwhelming.