Who Decides What AI Decides

What authority do algorithms have to make decisions on our behalf? Who gets to set the rules for our digital world? LLS professor Andrew Keane Woods has built a formidable body of scholarship around these crucial questions.

By Diane Krieger

James Madison worried about “private sovereigns,” individuals beyond the reach of democratic accountability who impose their will on the people.

Andrew Keane Woods thinks Madison, were he alive today, would recognize and recoil from our current digital moment.

“It feels like we’ve embraced a kind of digital feudalism,” says Woods, a professor of law and William M. Rains Fellow at LMU Loyola Law School (LLS). “We leave important questions about speech, privacy, and discrimination to the CEOs of our leading tech companies.”

A contracts scholar, political theorist, and expert on the regulation of digital platforms, Woods has spent the last 20 years interrogating who should govern the internet and by what right.

Woods is not anti-technology; he takes a Waymo to school every week. But since we get our news, conduct our politics, build our careers, and even find our romantic partners on the same handful of privately owned platforms, the question of who governs our presence on them carries tremendous weight. While many contracts scholars question the validity of “clickwrap” — the boilerplate agreements many of us click without thinking — Woods is more interested in a different question: “Forget whether anyone ever agreed or not; do we actually want these contracts? Are they good for us as a society?”

Woods thinks the rules governing what gets seen, boosted, suppressed, or monetized are too consequential to be dictated by a handful of private companies answering only to shareholders. The same goes for algorithms that increasingly shape our health decisions, job opportunities, legal services, and financial credit.

“At some level, I think we are talking about all-of-society rulemaking, and the only legitimate mechanism for making society-wide rules is the democratic process,” he asserts.

*

Woods teaches “Contracts,” “Privacy Law,” and the innovative “Law and Technology Lab” seminar at Loyola.

He and his partner, Albertina Antognini, are both new to the LLS faculty. She specializes in family law and is a leading expert on non-marital relationships. Their move to LLS was driven partly by intellectual conviction — Loyola’s culture of scholarly seriousness and its unabashed social justice mission strongly appealed to them. Los Angeles was also a draw. Woods pedals his young daughters, ages 4 and 2, to preschool on a cargo bike, and he is embarking on an eating tour of every taco truck in Southern California.

Woods studied political science at Brown University before earning his J.D. at Harvard. He then completed a doctorate in politics at Cambridge University in 2012. His dissertation questioned the wisdom of using large international criminal tribunals to solve complex social and political problems like civil war and genocide.

Early in his career, Woods was a frequent flyer on the human rights conference circuit. From India to Brazil, he heard local lawyers and judges gripe about “these American tech companies.” (Woods felt somewhat abashed; he was often the only American in the room.)

Here’s how the complaint went: Big Tech companies open local offices, build massive user bases, rake in billions of dollars, and then refuse to comply with local laws.

His 2016 Stanford Law Review article, “Against Data Exceptionalism,” staked out Woods’ foundational position: that the internet is not magic, and the companies that run it are not above the law. When Bank of America opens a branch in Dubai, it must follow local rules. The same goes for Costco and Chevron. Why should Google, Facebook, or TikTok be any different?

That question seeded a growing body of literature by Woods, including 20 journal articles and book chapters, and more than 40 essays in Lawfare, an online magazine focused on law and national security policy.

Tech companies, Woods believes, market themselves as uniquely fragile innovators requiring special care and feeding. But digital data is tied to physical server locations and can be regulated through established legal frameworks. There’s no need to create a new “cyberlaw”; to resolve cross-border data disputes, courts can simply apply longstanding conflict-of-laws principles.

*

It’s impossible, Woods says, to write about internet regulation without running into the First Amendment. That’s because the Supreme Court spent half a century interpreting it in ways that have handed corporations, and potentially now their algorithms, formidable free-speech protections.

“The First Amendment is a massive barrier to regulation,” Woods says. Common-sense measures to protect children online or to address serious societal problems, like deepfakes and social media addiction, routinely face First Amendment challenges. Even potentially lethal content — ghost gun files, for example — is difficult to regulate.

Woods wrestles with the conundrum in “From Gods to Google,” an article in The Yale Law Journal co-authored with Professors Rebecca Aviel, Margot Kaminski, and Toni Massaro. The scholars trace how the Supreme Court’s First Amendment jurisprudence granting speech protections to religious objectors can and does migrate into broader corporate speech rights.

Take two recent Supreme Court decisions. In 303 Creative v. Elenis, the court sided with a Colorado web designer who refused to build wedding sites for same-sex couples on religious grounds, despite a Colorado public accommodations law that prohibited discrimination on the basis of sexual orientation. And in Moody v. NetChoice, the court made clear that social media companies’ content moderation choices are protected expression under the First Amendment.

“You might think one has nothing to do with the other,” says Woods. “But at oral argument in NetChoice, Paul Clement, arguing on behalf of the technology industry, invoked 303 Creative in his opening remarks.”

Because the Colorado case was decided on free-speech and not free-exercise grounds, “technology companies immediately seized upon the case to challenge state regulation of internet platforms,” Woods writes.

Woods and his co-authors draw a throughline from religious speaker cases (like 303 Creative) to internet regulation cases (like NetChoice) and beyond. The implications of the industry’s argument are even greater in a world of artificial intelligence speech: “Their argument was ‘if the law protects web designers, then it has to protect our editorial choices, which by the way are implemented by algorithms.’” From there, it’s only a small leap to granting speech rights to AI itself. “I think that’s crazy,” Woods says.

*

Woods’ forthcoming law review article, “A Deference Regime for Machines,” examines AI jurisprudence from a different angle. Courts are increasingly being asked to evaluate algorithmic decision-making. In Arkansas, a state welfare agency replaced its human caseworkers with an algorithm, and many beneficiaries saw their health care benefits cut as a result. The affected parties sued, asking the court to override the algorithmic output.

This problem occurs in other areas, too. In criminal trials, judges routinely use algorithmic risk scores to assess a defendant’s likelihood of reoffending. In some school districts, teachers are evaluated by privately developed algorithms. How should courts assess these tools? In one sense, this is a completely new problem. But Woods suggests it might also be a familiar one. “For centuries, courts have developed doctrines that calibrate their deference to another decisionmaker across a wide range of legal domains: administrative law, corporate law, and contract law.” Just as he saw the cloud as merely the latest in a long line of conflict-of-laws debates, Woods sees judicial treatment of algorithms as merely the latest in a long line of deference debates.

*

Meanwhile, back in the classroom, Woods is rolling out his signature “Law and Technology Lab” at LLS for the first time. He developed the “experiential learning” elective over a decade ago and has taught the student-led seminar many times since.

In the Law and Tech Lab, once Woods has covered the basic parameters of tech law, his students take the lead. Each gives an expert briefing, backed by a PowerPoint deck, on a particular issue in the field. The whole class then participates in a related simulation: a Senate Judiciary Committee hearing, say, or a jury deliberation. Past classes have grappled with parents’ rights to curate an embryo’s DNA through CRISPR gene editing. One memorable simulation had students role-playing jurors in a murder trial with an evidentiary tech twist: to reach a verdict on the defendant’s culpability in his wife’s death, the jury had to rely exclusively on data logs from the couple’s smart home. (They acquitted, based on the refrigerator thermostat history.) In another class, students created a “Jurassic Park”-inspired simulation in which classmates had to decide whether a de-extincted creature could qualify for endangered species protections.

*

The many threads of Woods’ teaching and scholarship are coming together in a book. It’s a sustained inquiry into what Woods calls the two most consequential questions on the tech-law horizon: When should the state, rather than the market, be setting the rules? And when should machines make decisions instead of humans?

To answer the latter, Woods has paid close attention to the harms and benefits of algorithms and AI. A few years ago, he coined the term “robophobia” to describe an irrational fear of self-driving cars, medical AI, eDiscovery, and even autonomous weapons — technologies that, by many measures, already outperform humans in safety and reliability.

“Cars kill 40,000 Americans every year, and a million people worldwide,” Woods says. “Most of them are driven by humans. Yet in San Francisco, when a Waymo killed a cat, there was a serious effort to shut it down.”

At the same time, people put blind trust in algorithms to manage their calendars, shape their media diets, and even select their life partners by curating which dating profiles they see.

“We routinely prefer worse-performing humans over better-performing robots. Our bias against robots is costly, and it will only get more so as robots become more capable,” he predicts in the first of several articles he’s written on the subject.

Woods believes that, instead of relying on gut instinct to sort the good bots from the bad, we should make carefully considered choices based on hard facts.

What distinguishes Woods in the culture battles over emerging tech is his refusal to pick a side. He wants the state to reassert democratic authority over digital platforms. He also wants faster adoption of AI in medicine, transportation, and legal services.

Neither optimist nor pessimist, he prefers to call himself a “techno-realist.”

“I’m excited about the possibilities and worried about the downsides,” he says. “I want flying cars, but I also want sensible regulation. I want us to be rational about both the market and the state.”

Diane Krieger, a frequent contributor to LMU Magazine, is a Los Angeles-based freelance writer whose work has appeared in publications at USC, Tufts University, Johns Hopkins University, Caltech, and The Idaho Statesman, where she was the resident philharmonic and theater critic.