We Need Stronger Safeguards from Artificial Intelligence
BU law professor Woodrow Hartzog argues that current AI policies and oversight are far too weak, calling on Congress to move beyond what he calls “half measures”
Woodrow Hartzog, a professor of law at the Boston University School of Law, says that lawmakers must resist the idea that growing AI power is inevitable: “When lawmakers go straight to putting up guardrails, they fail to ask the existential question about whether particular AI systems should exist at all.” Photo by Robert Way/iStock
Artificial intelligence tools are now common enough for lawmakers to sense their power and their peril. People are exposed to AI systems at work, at play, and at rest. These tools are being used to shape what we see, what we can choose, and what opportunities we have in an increasingly online world.
To keep society safe as more and more AI power is deployed, we will need stronger rules—and regulations that don’t currently exist.
But it’s not clear how lawmakers should proceed. Up to this point, AI policy has largely been made up of industry-led approaches like encouraging transparency, mitigating bias, and promoting principles of ethics. No matter how lawmakers proceed in legislating AI, one thing is clear: these current approaches are vital, but they are only half measures. They will not fully protect us. To bring AI within the rule of law, lawmakers must go beyond them to create broad duties and specific rules that ensure that AI systems and the actors that deploy them are worthy of our trust.
Drawing from research on AI policy that I have conducted with Neil Richards, Ryan Durrie, and Jordan Francis at the Cordell Institute at Washington University in St. Louis, I recently testified before the US Senate Committee on the Judiciary on the importance of substantive legal protections when it comes to AI.
Those developing AI have staggering fortunes at their disposal and the power to deploy systems that exploit our data, labor, and precarity. Allowing them to dilute current laws or to regulate themselves is not a sufficient substitute for strong rules.
Let’s consider three popular AI half measures and why lawmakers must do more.
First, transparency is a popular proposed solution for opaque AI systems. But transparency does not produce accountability on its own. Even if we could fully understand how an AI system works, lawmakers would still need to intervene when these tools are harmful and abusive.
A second laudable but insufficient approach is when companies work to mitigate bias. AI systems are notoriously biased along lines of race, class, gender, and ability. While mitigating bias in AI systems is critical, self-regulatory efforts to make AI “fair” are doomed to fail. It’s easy to say that AI systems should not be biased; it’s very difficult to find consensus on what that means and how to get there. Additionally, it’s a mistake to assume that if a system is fair, it’s safe for all people. Biased systems are just a symptom of power being used to marginalize people. Even if we ensure that AI systems work equally well for all communities, all we will have done is create a more effective tool that the powerful can use against us to dominate, manipulate, and discriminate.
A third AI half measure is committing to ethical principles. Ethics are important and these principles can sound impressive, but they are a poor substitute for laws. It’s easy to publicly commit to ethics, but industry doesn’t have the incentive to leave money on the table for the good of society.

So, what can lawmakers do?
First, they must accept that AI systems are not neutral and must regulate how they are designed. People often argue that lawmakers should avoid design rules for tech because, as the saying goes, “there are no bad AI systems, only bad AI system users.” This view of technologies is wrong. There is no such thing as a neutral technology, including AI systems. Facial recognition technologies expose us. Generative AI systems replace labor. Lawmakers should borrow from established policies used to protect the public from erroneous and harmful business practices, like holding companies accountable for defective products or for providing tools that enable unfair or deceptive conduct.
Next, lawmakers should focus on substantive limitations that curtail abuses of power. AI systems are so complex and powerful that regulating them can seem like trying to regulate magic. But the broader risks and benefits of AI systems are not so new. AI systems bestow power. This power is used in all sorts of ways to benefit some and harm others. Lawmakers should borrow from established legal approaches to remedying power imbalances. This includes requiring broad, nonnegotiable duties of loyalty, care, and confidentiality, as well as implementing robust, bright-line rules that limit exploitative data practices in AI systems.
Finally, and most importantly, lawmakers must resist the idea that AI is inevitable. When lawmakers go straight to putting up guardrails, they fail to ask the existential question about whether particular AI systems should exist at all. This dooms us to half measures. Strong rules should include prohibitions on unacceptable AI practices, like emotion recognition, biometric surveillance in public spaces, predictive policing, and social scoring.
To avoid the mistakes of the past, lawmakers must make the hard calls. Trust and accountability can only exist where the law provides meaningful protections for humans. And AI half measures will certainly not be enough.
Woodrow Hartzog is a professor of law at the Boston University School of Law who is internationally recognized for his work in privacy and technology law. He is the author of Privacy’s Blueprint: The Battle to Control the Design of New Technologies (Harvard University Press, 2018) and the coauthor of Breached! Why Data Security Law Fails and How to Improve It (Oxford University Press, 2022).
Join Hartzog on October 25 for a Boston University Research on Tap event that will explore responsible AI, privacy, fairness, and accountability.
“Expert Take” is a research-led opinion page that provides commentaries from BU researchers on a variety of issues—local, national, or international—related to their work. Anyone interested in submitting a piece should contact thebrink@bu.edu. The Brink reserves the right to reject or edit submissions. The views expressed are solely those of the author and are not intended to represent the views of Boston University.