Should AI Regulation Control Innovation?

by David Gleason

Generative AI and Regulations

“Virtue is more to be feared than vice, because its excesses are not subject to the regulation of conscience.” (Adam Smith)

When blockchain first entered the global conversation, it appeared in virtually every journal and newspaper. At the time, several respected former U.S. statesmen published an editorial proposing its use to solve land rights disputes in Latin America. Their premise was noble, since clear land rights are foundational to any free society. But the solution was deeply flawed: a blockchain can only record claims, and an immutable ledger cannot erase centuries of corruption in the underlying records; it merely preserves them.

What struck me was that the authors were not technologists, yet they projected their ideas onto a technology they did not fully understand. Sound familiar? Each time a revolutionary technology emerges, politicians and pundits rush to control it. The short answer to our title question is no: AI regulation should not pre-empt innovation. We need targeted, domain-specific safeguards, not sweeping rules that smother progress.

Key Takeaways

  • Politicians often misunderstand the technologies they try to regulate

  • The AI Bill of Rights offers principles, not practical implementation guidance

  • Bias is not inherently bad in AI, and removing it blindly can cause harm

  • Heavy, one-size-fits-all AI regulation risks stifling innovation and weakening competitiveness

Why Do Politicians Struggle to Regulate Emerging Tech?

History repeats itself. Just as blockchain was misapplied to land rights, AI regulation proposals are now being drafted by people who often lack technical depth. Policymakers see potential risks and reach for simple solutions. But complex systems break under blunt rules. Instead of careful, domain-specific guidance, we get sweeping frameworks with predictable, unintended consequences.

At All In On Data, we work with decision-makers to cut through the noise and focus on what really matters in emerging technologies. Through tailored workshops, we help C-suite leaders and board members stay current on Artificial Intelligence, Generative AI, and data trends, so you can make smarter, faster decisions.

What Does the Artificial Intelligence Bill of Rights Say?

The White House recently introduced the Blueprint for an AI Bill of Rights, a framework outlining principles like safe and effective systems, protections from algorithmic discrimination, and data privacy. While its intentions are good, the document is non-binding and light on practical detail, offering high-level values but little operational guidance.

When it comes to implementation, our team helps organizations stay agile while still managing risk. See how in our Expertise section.

Is Bias in AI Always Bad?

Ask most people to define machine learning and you’ll get blank stares or contradictions. My simplified response: AI is advanced pattern matching. Like humans, it recognizes and applies patterns. Data will always contain some bias because reality contains bias. Trying to strip it all away can distort reality and degrade performance.
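
To make that definition concrete, here is a toy nearest-neighbor sketch (all data invented for illustration): the "model" labels a new point by whichever stored pattern it most resembles.

```python
# Toy illustration of "AI as advanced pattern matching": a 1-nearest-neighbor
# classifier labels a new point by the known example closest to it.
# The feature vectors and labels below are invented for illustration only.
import numpy as np

examples = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [4.8, 5.2]])
labels = np.array(["cat", "cat", "dog", "dog"])

def predict(point):
    distances = np.linalg.norm(examples - point, axis=1)  # distance to each stored pattern
    return labels[np.argmin(distances)]                   # match the closest one

print(predict(np.array([1.1, 1.0])))  # -> cat
```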

Consider medical diagnostics. If we remove variables or rebalance data in ways that erase relevant differences, we may reduce diagnostic accuracy. In that case, a company following a checklist interpretation of the AI Bill of Rights might drift toward malpractice. The issue is not the existence of bias; it is whether bias is identified, monitored, and managed responsibly by domain experts.
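
A minimal sketch of that failure mode, using synthetic data and invented variables, shows why: when the outcome genuinely depends on a patient's age, a model that is "de-biased" by dropping age loses accuracy.

```python
# Synthetic sketch: dropping a clinically relevant variable ("age") in the
# name of de-biasing degrades diagnostic accuracy. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(50, 15, n)       # relevant patient variable
marker = rng.normal(0, 1, n)      # some lab measurement
# The outcome genuinely depends on both age and the marker
y = (0.06 * (age - 50) + 1.2 * marker + rng.normal(0, 1, n) > 0).astype(int)

X_full = np.column_stack([age, marker])
X_reduced = marker.reshape(-1, 1)  # "de-biased" by erasing age

for name, X in [("with age", X_full), ("age removed", X_reduced)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))
```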

What Are the Risks of Overregulating AI?

Overregulation tends to apply blunt instruments to nuanced systems. The table below summarizes the most common failure modes and better alternatives:

| Regulatory move (one-size-fits-all) | Likely outcome | Better path (domain-specific) |
| --- | --- | --- |
| Broad “fairness” mandates that suppress key features | Lower accuracy; hidden biases reappear elsewhere | Let domain experts define fairness metrics tied to outcomes (e.g., error rates in clinical AI) |
| Heavy pre-approval and reporting for all models | Compliance drag, startups exit, slower iteration | Risk-tier models by use-case; scale requirements by risk and impact (see the sketch below) |
| Strict data minimization regardless of context | Starved models, poor performance on edge cases | Purpose-bound data with strong governance, audits, and privacy controls |
| Blanket bans on model classes | Innovation flight to other jurisdictions | Safety cases and post-deployment monitoring rather than bans |
| Uniform procurement rules across sectors | Misfit solutions, wasted spend | Sector playbooks aligned to regulations and outcomes (health, finance, public sector) |
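
To make the table's "risk-tier models by use-case" row concrete, here is a minimal sketch. The tiers, use-cases, and required controls are entirely hypothetical, stand-ins for what regulators and domain experts would define per sector.

```python
# Hypothetical sketch of risk-tiered governance: controls scale with the
# potential impact of the use-case instead of applying uniformly to all
# models. Tiers, use-cases, and controls are invented for illustration.
TIERS = {
    "low":    ["model card", "basic logging"],
    "medium": ["model card", "bias audit", "human review of edge cases"],
    "high":   ["model card", "bias audit", "domain-expert sign-off",
               "post-deployment monitoring"],
}

USE_CASE_TIER = {
    "marketing copy drafts":     "low",
    "internal document search":  "medium",
    "credit scoring":            "high",
    "clinical decision support": "high",
}

def required_controls(use_case: str) -> list[str]:
    # Unknown use-cases default to the strictest tier
    return TIERS[USE_CASE_TIER.get(use_case, "high")]

print(required_controls("credit scoring"))
```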

As Milton Friedman famously quipped:

“If you put the federal government in charge of the Sahara Desert, in 5 years there’d be a shortage of sand.”

Overregulation turns abundance into scarcity. With AI regulation, the stakes are global competitiveness and broad access to improved services.

Should AI Regulation Define Fairness?


Fairness is subjective. What is fair to one stakeholder may be unfair to another. Encoding a single, political definition of fairness into all systems risks breaking models and harming the people they are meant to help.

In medicine, meaningful fairness means measuring performance across populations and minimizing harmful error gaps, not pretending differences do not exist.
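
As one illustration, that kind of measurement might look like the following sketch. The labels, predictions, and groups are invented, and false-negative rate is just one of several error rates worth tracking.

```python
# Minimal sketch of "fairness as measured performance per population":
# compute the false-negative rate per group and watch the gap.
# Labels, predictions, and group assignments are invented for illustration.
import numpy as np

def fnr(y_true, y_pred):
    positives = y_true == 1
    if not positives.any():
        return float("nan")
    return float(np.mean(y_pred[positives] == 0))  # missed positives

y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1])
group  = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"])

gaps = {g: fnr(y_true[group == g], y_pred[group == g]) for g in ("A", "B")}
print(gaps)  # e.g. {'A': 0.33, 'B': 0.67} -> investigate and minimize the gap
```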

Why Domain-Specific Controls Work Better

AI oversight should be designed by the professionals closest to the risks. Doctors should help define guardrails for diagnostic AI. Lawyers should guide legal-advice tools. Financial risk officers should tune trading and credit models.

Bias cannot be eliminated, only managed. Accuracy depends on reflecting reality, not erasing it. Domain-specific controls achieve balance: safer systems, without suffocating innovation.

For a deeper dive into large-model risk thinking, see our related post: Tiger by the Tail: Risks of Large Language Models.

How Does AI Democratize Knowledge?

AI is not new. The ideas have been around for decades. What changed is economics: we finally have the compute and storage to train massive models. These models alarm some policymakers, yet they also hold tremendous potential.

Imagine embedding the expertise of world-class physicians and making it globally available at a fraction of the cost. That is the promise of responsible AI: democratized knowledge, better decisions, broader access.

Start the Conversation, Not the Crackdown

We need thoughtful debates about AI regulation. We should discuss risk, ethics, and impact. But rushing into sweeping, pre-emptive rules while the technology is still evolving risks smothering the very innovation that will deliver the greatest societal benefits. The right question is not whether AI should be regulated, but how and when, calibrated to real-world risk.


Building a Responsible AI Future

AI can transform industries, increase productivity, and expand access to expertise. Overregulation at this stage would slow that progress and hand advantage to less cautious competitors. The path forward is clear: encourage innovation, govern by risk, and let domain experts set the most critical controls.

Leaders should invest in literacy about bias and measurement, establish domain-specific review processes, and participate in pragmatic policy conversations so outcomes are workable, not wishful.

If your team needs help drafting a practical governance approach that protects innovation, we can help.

Contact us to build a responsible AI roadmap that balances safety with speed.

And if you’d like to know more about the founders and their experience leading in this space, visit our About page.

FAQ

Q: What is AI regulation?

A: AI regulation refers to laws and policies that govern how AI is developed, deployed, and used, ideally tied to risk and real-world outcomes rather than blanket prescriptions.

Q: Why is it controversial?

A: Premature or blunt rules can create unintended consequences, suppress innovation, and weaken competitiveness, while failing to improve safety in the areas that matter most.

Q: What does the Artificial Intelligence Bill of Rights propose?

A: It outlines non-binding principles like safety, non-discrimination, privacy, notice and explanation, and human alternatives. It is a starting point for discussion, not an operational standard. Learn more in the official Blueprint for an AI Bill of Rights.

Q: Should bias in AI always be removed?

A: Not necessarily. In many applications, reflecting real-world patterns is essential for accuracy. The goal is to measure and manage bias responsibly, not erase important signals.