Training Lawyers for an AI World

At LMU Loyola Law School, students are learning to wield AI, challenge it, and shape the law that will govern it.

By Robin Post

From left: Emilia Zielinski '28 and Aylin Ramirez '28 are learning the potentials and pitfalls of AI as lawyers-in-training.

The case study sounded like a law school hypothetical. It wasn’t. Shortly after midterms, students Sofia Tourgeman ’26 and Kayla Long ’27 led their classmates through one of the most urgent legal problems in technology today: deepfake pornography. The victims are real. The harms are devastating. And the law, as their analysis made clear, is not keeping up.

Existing legal frameworks, Long and Tourgeman argued, were built for a different world — one where misconduct involves tangible actions, identifiable actors, and a single verifiable event. Deepfakes break all three assumptions. The content is synthetic; the perpetrators are often anonymous; and once an image is distributed online, it replicates beyond any individual’s control.

“Common law privacy torts were designed to address misconduct of tangible actions, not synthetic media,” Long said. “They address a real underlying event, so it’s challenging to apply to [artificial intelligence] content, anonymous users, and rapid online distribution.”

But the conversation didn’t stop at diagnosis. Students pushed into the harder question: what should the law look like? They debated how new legislation might better protect victims in a landscape that shifts faster than any statute can follow — examining recent federal proposals and asking whether existing frameworks could be adapted or whether something fundamentally new is required.

These are the kinds of discussions happening in classrooms across LMU Loyola Law School. Led by faculty at the forefront of AI and technology governance, Loyola Law students are doing more than studying how AI is changing the legal profession. They’re preparing to decide the rules.

Bailee Hudson '28 (third from left) in Professor Andrew Woods' contracts class.

The Tool and Its Limits

Every lawyer entering practice today will use AI. The question is whether they’ll use it well. At Loyola Law, faculty are making sure students confront that question early and often — not by banning AI tools from the classroom, but by forcing students to stress-test them.

In legal writing courses, Professor Vince Farhat introduces AI in carefully calibrated stages: first restricted entirely, then permitted for refinement, and finally evaluated as one instrument among many. The point is not to produce students who can prompt an AI effectively. It’s to produce lawyers who know when to trust a tool and when to override it.

In “Civil Procedure,” Professor Rebecca Delfino takes a more confrontational approach, asking students to pit their own legal analysis directly against AI outputs. The exercise is revealing.

“Is it citing the right cases? Is it applying the law correctly?” said first-year Aylin Ramirez ’28. “It makes us more attuned to what AI might miss.”

Delfino has also guided students toward more reliable AI tools — platforms designed for legal research that draw on verified case law rather than general-purpose language models. The distinction matters. And students are learning it firsthand.

First-year Bailee Hudson ’28 ran her own experiment. She used ChatGPT to summarize a case she had already briefed, curious to see how the tool would perform on material she knew cold.

The results were sobering.

“I just wanted certain points clarified, and it had gotten a case wrong. It had gotten all the facts wrong,” Hudson said.

Now, like many of her peers, Hudson uses AI cautiously — and verifies everything. The lesson was not that AI is useless. It’s that competence requires knowing where the tool ends and professional judgment begins.

That instinct will matter. As the first generation of lawyers to train entirely in the age of AI, these students are internalizing a discipline their predecessors never needed: the habit of questioning outputs delivered with unearned confidence. The risks are not abstract. Fabricated citations have already led to sanctions against practicing attorneys. Confidential client information can be exposed through careless use of AI platforms.

“When you’re in front of a court, and the court asks you: ‘Is this your writing?’ and you have to tell them, ‘No,’ it can get you to really troublesome issues,” Ramirez said. “Everything just goes back to ethics and making sure that the client is paying for someone who’s actually willing to do the work.”

From left: Noor Hasan '28 and Bailee Hudson '28 have engaged with questions of ethics, human rights, and privacy in their exploration of AI and the law.

Beyond the Toolbar

Learning to use AI responsibly is only half the equation. At Loyola Law, students are also grappling with the broader legal and constitutional questions AI is raising, questions that will define entire fields of practice for decades to come.

In Professor Andrew Keane Woods’ “Contracts” class, the entry point is deceptively mundane: the terms and conditions that users accept every day without reading. Woods has students pull apart these agreements line by line. What they find is startling.

First-year Emilia Zielinski ’28 and her classmates discovered that the boilerplate language most people click past actually authorizes the collection, use, and storage of personal data in ways that extend far beyond showing “relevant” content.

“This open-ended ‘use’ now also includes using the data to train generative AI engines, which will cause many issues down the road as we navigate the privacy and IP ownership implications of this use,” Zielinski said.

The stakes could hardly be higher. The contracts governing AI’s access to personal data are being drafted now — often by the very companies with the most to gain from expansive terms. The lawyers who understand these instruments, and who can challenge or reshape them, will have outsized influence on what AI is permitted to do.

In Woods’ “Law and Technology” course, the frame widens further. Second-year Benjamin Azran ’27 sees the emerging landscape clearly: AI tools will make research, document review, and drafting faster, freeing lawyers to focus on strategy and judgment. But those same technologies are simultaneously generating an entirely new body of legal disputes — around privacy, intellectual property, and the scope of regulation — that will require lawyers who understand both the technology and the law it implicates.

First-year Noor Hasan ’28 puts it in starker terms. In Woods’ “Contracts” class, what began as doctrinal analysis kept opening onto larger questions — about civil rights, the First Amendment, and the power of corporations to harvest and exploit personal information at scale.

“It’s been truly an incredible experience because I think it has formed this very robust conversation where civil rights will come up, your First Amendment rights, and what it means for companies and corporations to have access to your information digitally,” Hasan said. “Being in contracts has helped fill in so many gaps — gaps I think our legislature has in terms of technology and AI and what that means for privacy, and what it means for your basic human rights.”

Hasan acknowledged that some of her classmates have felt overwhelmed by the scale of the problem — and by how deeply entangled legislators and the technology industry have become. That skepticism is reasonable, and Woods doesn’t dismiss it. But he pushes students to recognize what lawyers are uniquely positioned to do: work through both the legislature and the courts to set the boundaries that democratic governance requires.

Today’s law students will be the ones drafting AI legislation, litigating its boundaries, and representing clients on every side of these disputes. At Loyola Law, they’re not waiting for the future to arrive. They’re already doing the work.

Robin Post, associate director of marketing and communication at LLS, is a higher education communicator and writer. Her work has been published by USC, UCLA, Stanford, the Palo Alto Weekly, LA Weekly, and LA STAGE Times. She received her master's in arts journalism from USC.