Deep Learning

Across the legal terrain, UC Berkeley Law’s scholars, students, and programs are at the vanguard on AI.
By Gwyneth K. Shaw
Illustrations by Ryan Olbrysh
Like a slew of innovations that preceded it — from the telegraph to nanotechnology — artificial intelligence is both changing a wide swath of our landscape and raising an equally broad set of concerns.

At UC Berkeley Law, a Silicon Valley neighbor long renowned for its top technology law programs, faculty, students, research centers, and executive and Continuing Legal Education platforms are meeting the challenges head on. From different corners of the legal and policy world, they’re positioned to understand and explain the latest AI offerings and highlight places where guardrails are needed — and where a hands-off approach would be smarter.

This summer, the school will begin offering a Master of Laws (LL.M.) degree with an AI focus — the first of its kind at an American law school. The AI Law and Regulation certificate is open to LL.M. students in the executive track program, which is completed over two summers or through remote study combined with one summer on campus.

“At Berkeley Law, we are committed to leading the way in legal education by anticipating the future needs of our profession. Our AI-focused degree program is a testament to our dedication to preparing our students for the challenges and opportunities presented by emerging technologies,” Dean Erwin Chemerinsky says. “This program underscores our commitment to innovation and excellence, ensuring our graduates are at the forefront of the legal landscape.”

The certificate is just one of several ways practitioners can add AI understanding to their professional toolkit. Berkeley Law’s Executive Education program offers an annual AI Institute that takes participants from the basics of the technology to the regulatory big picture, as well as Generative AI for the Legal Profession, a self-paced course that opened registration for its second cohort in February.

At the Berkeley Center for Law & Technology (BCLT) — long the epicenter of the school’s tech program — the AI, Platforms, and Society Center aims to build community among practitioners while supporting research and training. The center is a partnership with the CITRIS Policy Lab at the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS), which draws on expertise from the UC campuses at Berkeley, Davis, Merced, and Santa Cruz, and the BCLT program works with UC Berkeley’s Goldman School of Public Policy, School of Information, and College of Engineering.

The center also hosts AI-related events, which are available on its innovative B-CLE platform (see “B-CLE’S A-Plus Program”). So does the Berkeley Center for Law and Business, through webinars and in-person talks with expert corporate and startup leaders.

DUE DILIGENCE: Professor Jennifer M. Urban ’00, director of policy initiatives at the Samuelson Law, Technology & Public Policy Clinic, urges resisting shortcuts in developing AI guidelines.

Drilling down

Many of BCLT’s 20 faculty co-directors have AI issues on their scholarship agenda, including Kenneth A. Bamberger, Colleen V. Chien ’02, Sonia Katyal, Deirdre Mulligan, Tejas N. Narechania, Brandie Nonnecke, Andrea Roth, Pamela Samuelson, Catherine Crump, Jennifer M. Urban ’00, Erik Stallman ’03, and Rebecca Wexler. Their expertise spans the full spectrum of AI-adjacent questions, including privacy concerns, intellectual property and competition issues, and the implications for the criminal justice system.

“The most important potential benefit of AI is harnessing the power of the vast amount of information we have about our world, in ways that we’ve been stymied from doing because of scale and complexity, to answer questions and generate ideas,” says Urban, the director of policy initiatives at the school’s Samuelson Law, Technology & Public Policy Clinic.

“The potential drawbacks stem from the risk that we fail to develop and deploy AI technology in service of those benefits, but instead take shortcuts that destroy public trust and cause unnecessary damage. A big use of automation already, for example, is automated decision-making,” she explains. “Unfortunately, some automated systems have been prone to bias and mistakes — even when making critically important decisions like benefits allocations, healthcare coverage, and employment decisions.

“Ultimately, any decision-making system requires trust. We risk creating — or not interrupting — incentives that nudge the development of the technology away from publicly beneficial uses and toward untrustworthy results.”

The clinic also affords students the opportunity to get hands-on experience in developing and influencing AI policy (see below), says Stallman, the assistant director.

“Artificial intelligence is raising new questions or resurrecting old ones in every field of law and technology,” he says. “In the Samuelson Clinic, students are weighing in on those questions and also confronting how AI is influencing the way we practice law.”

One of the more pressing issues at the moment, Samuelson says, is even agreeing on precisely what constitutes AI. A pioneer in digital copyright law, intellectual property, cyberlaw, and information policy, she spoke to Europe’s Digital Agenda Conference at the University of Bergen last fall about the United States’ perspective on the current landscape.

“If you have a technology that’s well-defined, and everybody knows what it is and what it isn’t, that’s one thing,” she says. “But nobody has a good definition for what artificial intelligence is, and today, the hype around that term means that people are calling pretty much every software system AI, and it’s just not.”

VALUED VOICE: Professor Tejas N. Narechania’s work on AI and machine learning led to an invitation to the White House to comment on then-President Joe Biden’s policies.
Some companies are genuinely building and refining large language models and other neural networks that could profoundly change the creative sector and reorient a host of business models. Others are just jumping on a next-new-thing bandwagon that only recently held out non-fungible tokens (NFTs), cryptocurrency, and blockchain as world-altering innovations, Samuelson says.

“Not so long ago, there was this sense that they were going to sweep away everything and we need a whole new body of regulations for them. But no, they turned out to be kind of marginal phenomena,” she adds. “I don’t think that AI systems are marginal phenomena, but I don’t think they’re one thing, either.”

In her European presentation, Samuelson said copyright law is the only U.S. law on the books that could “bring AI to its knees.” Multiple cases are pending in courts across the country from heavy hitters in the creative world, including The New York Times and Getty Images, alleging that scraping those companies’ original works to train generative models that then create new visual and text-based works violates the copyright laws.

AI companies often argue that their tactics constitute a “fair use” under the current federal copyright laws — a question that’s been well-litigated in cases involving music samples and online books, to give just two examples. But Samuelson says a sweeping judgment about AI seems unlikely.

Narechania, whose work on AI and machine learning led to an invitation to the White House to comment on then-President Joe Biden’s policies, says the competition angle raises other big questions.

“If you look at the companies playing in this space, there are fewer and fewer of them, they tend to be more concentrated, they tend to be overlapping. And that has implications for both competition and innovation,” he says. “AI appears to us as a magic technology. You go to ChatGPT in your browser, type something in, get a response, it’s fun. But once you peek under the hood and look at what the technology stack looks like underneath it, you see a funnel that narrows pretty quickly. There are lots and lots of applications, but a bunch of them are all sitting on top of GPT — that is, there is only one model of language.

“And that funnel, that lack of competition below the application layer, has problems. What is the quality going to be of these models, to the extent we’re worried about bias or discrimination or risk? What’s the data that are input into these models? Who’s getting it? Where is it coming from?”

Multiple providers could help improve these systems through market competition, Narechania adds. If that’s not possible, regulations might be necessary to ensure the public gets a real benefit out of the technology.

Berkeley Law Voices Carry

Listen to Professors Tejas N. Narechania and Rebecca Wexler talk about the challenges of AI on the “Berkeley Law Voices Carry” podcast.

Eyes everywhere

Just 20 years ago, the notion of catching a criminal suspect using publicly mounted cameras and facial recognition technology felt like an outlandish plot point of a “24” episode. These days, with almost ubiquitous surveillance in many urban areas and rapidly developing capabilities, it’s a key advantage of AI.

But serious questions remain about the accuracy of information officials are using to arrest and convict people — and those concerns go far beyond cameras. Large language models trained on a limited diet of text, images, or characteristics could reproduce bias, for example, or spit out a result that’s not fully grounded in the factual evidence.

Wexler, who studies data, technology, and secrecy in the criminal justice system, says AI raises genuine concerns. But many of them are related to a broader lack of transparency about tech-aided evidence, she explains, or even expert testimony from human beings.

“AI is, in a way, an opportunity, and it’s shining a spotlight on these issues that are relevant to AI but not necessarily unique to AI,” she says.

Wexler has written about how some software vendors use contract law protections to avoid peer review of their applications. That means police, prosecutors, judges, and juries are relying on results from devices and programs that haven’t been independently vetted to see if they’re returning accurate results.

AI is increasing that reliance, Wexler says, and its “shiny mystery” might lead jurors to set aside the kind of skepticism they would apply to a human expert. So when a police officer gets on the witness stand and describes the result they got from a device, they’re recounting the button they pushed — not the way the machine produced evidence.

Roth, who’s been writing about the evidentiary issues raised by machines for nearly a decade, points out that courts have ruled there’s no Sixth Amendment right to cross-examine a software developer.

In a recent webinar about new technologies and tools in the criminal system, she told the audience that few rules govern the assertions of automated systems, and offered advice for lawyers in thinking about how to use those outputs.

COMMON GROUND: Professor Pamela Samuelson, who’s been studying the boundaries of innovation for decades, says agreeing on precisely what constitutes AI is a pressing issue.
“If you want the source code, you’re going to have to explain why you need it, and you may need to talk to an expert about that,” Roth said. “If this program could talk, and you could submit them to a deposition, what would you ask the program about their assumptions, or the hypotheticals that they considered?”

Other scholars are exploring whether AI can be harnessed to improve the legal system. Chien, whose Law and Governance of Artificial Intelligence course will be the backbone of the LL.M. certificate program, has a forthcoming article that proposes using ChatGPT-style applications to improve access to the court system for low-income people.

With evictions, record expungement, and immigration, for example, a chatbot might help those who have difficulty finding or affording an attorney get the right legal advice or match up with a pro bono practitioner, she and her co-authors write. She also co-authored the first field study of legal aid attorneys using AI to improve service delivery.

“Generative AI technologies hold great potential for addressing systematic inequalities like the justice gap, but fulfilling this potential won’t happen organically,” Chien says. “More attention to the potential benefits, like reducing the cost of legal services for the underserved and not just the harms of AI, could have big payoffs.”

The safety risks are what grab most people’s attention, Samuelson adds, particularly lawmakers. More than 700 bills seeking to rein in AI are floating around state legislatures, including in California.

“The AI systems are powerful, and they’re often not explainable, and they make predictions, and they yield other kinds of outputs that will affect people’s lives,” she says. “There’s a lot of worry about discrimination, misinformation, and privacy violations.”

But the European Union’s generally proactive model of regulation may create a two-tiered system that stifles access to innovation and works against the very goal it’s reaching for. It’s probably misguided to think that big U.S. tech companies like Apple and Meta will agree to comply with new rules from Brussels, and if they don’t, smaller companies and European customers could be left out in the cold.

Various forms of AI carry different risk profiles, Samuelson says, and applications for aviation, hospital record-keeping, and job recruiting shouldn’t get the same regulatory treatment. A nudge might be more effective than a hard standard, she argues.

“The people who are developing these systems are not trying to deploy them to destroy us all. They think they have some beneficial uses,” Samuelson says. “Then the question is, how do you balance the benefits of advanced technologies against the harms that they might do? And I think rather than mandating that everyone has to have a kill switch, we could do something more targeted.”

PRIZED GUEST: Berkeley Center for Law & Technology Executive Director Wayne Stacy (center) with officials at Kathmandu University, where he spoke during his 30-day stay in Nepal. Photo by Padma Rijal

Peak Performer

Berkeley Center for Law & Technology Executive Director Wayne Stacy has been immersed in the development of American tech law for decades, aiding and monitoring the evolution of everything from patent rules to artificial intelligence.

As a top patent litigator, as leader of the U.S. Patent and Trademark Office’s West Coast branch, in the classroom, and most recently at the helm of UC Berkeley Law’s technology hub, Stacy has been positioned at the intersection of innovation and this country’s legal and governmental principles.

But what if a country didn’t have that history to rely on — not just regarding longstanding principles guaranteeing freedom of speech and assembly, but also questions about where a resource-limited nation should invest as it tries to keep up with innovation’s breakneck pace?

Stacy got to find out in December, when he visited Tribhuvan University in Kathmandu, Nepal, as a Fulbright Specialist, part of a U.S. Department of State program. Over just 30 days, he helped the law faculty at Tribhuvan — which oversees Nepal’s public legal-education system — build out a brand-new tech law curriculum for all of the country’s public law schools, including the burgeoning AI sector.

While the workload was heavy — Nepal has six-day work weeks — Stacy did see some of the country. He visited Mount Everest’s base camp and, having grown up farming himself, toured the countryside to see how agriculture works there and “get a feel for where technology is still irrelevant.”

He calls the experience fascinating for multiple reasons: the chance to observe another culture closely and understand its internal and external political pressures; to discuss which legal topics best fit the country’s current and future needs; and to reflect on how countries that are just developing technology sectors should use rules and regulations formed in places, like the European Union and the U.S., that have worked on these topics for many years.

Master of Laws (LL.M.) students in Nepal — wedged between twin technology powerhouses China and India — needed a framework for understanding and developing tech law. Tribhuvan was the linchpin for the curriculum development since the government, universities, and judiciary are stocked with its graduates, Stacy explains.

“It came down to not telling them how to do things, but making it clear that this is how the world has designed these programs, from privacy to AI,” he says. “A lot of countries are developing these laws from scratch because they’re just now facing these tech issues, and just adopting regulatory approaches wholesale from other nations is not always going to work.”

Because the curriculum will be used for many years, he adds, there’s room for growth and change in the class structure as the technology sector evolves. Stacy plans to stay involved and hopes to bring some of the ideas and comparisons generated during the process back home.

“They’re facing many problems that we first faced 10 to 20 years ago,” he says. “Now they can look at what the rest of the world has done and is doing.” — Gwyneth K. Shaw

TROUBLESHOOTING: Professor Colleen V. Chien ’02 is bridging the worlds of law and computer science to address various problems with access to the justice system for lower-income people.
DOOR OPENER: Samuelson Law, Technology & Public Policy Clinic Assistant Director and Clinical Professor Erik Stallman ’03 helps students gain experience in working to shape AI policy.

A front-row seat to innovation

In addition to the new LL.M. program, UC Berkeley Law students interested in AI have a wide variety of options — at the law school and across campus.

2L Juliette Draper worked with Chien on a project about proposed reforms to California’s driver’s license suspension policies and used ChatGPT to summarize the various bills in play. The program dramatically increased her efficiency, she says, quickly producing 30 paragraphs about the legislation.

“I think it’s incredibly exciting to be in this period of time where artificial intelligence, technology, energy law, and policy at large is really important, because AI and technology intersect with everything that we do,” Draper says. “It’s cool to think how if you’re interested in any topic — from reproductive rights and healthcare to immigration reform — AI presents a lot of unique challenges, but it also offers unique tools that can help us in those areas.”

Pranav Ramakrishnan LL.M. ’25 worked with Chien on a project focused on AI governance, examining how large language models evaluate resumes in relation to criminal history and race and exploring algorithmic bias in hiring.

To gain a broader perspective, he took a fall semester course at UC Berkeley’s Haas School of Business with Matthew Rappaport, general partner of the venture capital fund Future Frontier Capital and co-founder of the school’s Deep Tech Innovation Lab. The unique cross-disciplinary course put law, business, and engineering students together to absorb the full spectrum of what tech-related businesses need.

“I was fascinated by this, because I got to interact with great engineers and great business minds,” he says. “The engineers got to learn a little bit of law, and I got to explore business and engineering more deeply.”

3L Bani Sapra worked with ACLU California Action as a Samuelson Clinic student on a comment aimed at Gov. Gavin Newsom’s 2023 executive order on generative AI. She says that while the tech sector is bullish on the many potential applications and benefits, it’s still new and there is much more to develop. Ultimately, AI may reshape the practice of law, she adds.

“In the same way that Google changed the way we research, AI has the potential to change the way we do discovery and trials, and we can already see law firms exploring how to adopt those methods,” Sapra says. “But my work at the clinic really taught me that these sorts of innovations need to be embraced with great caution, and you need to constantly check to make sure it’s creating the outcomes we’re expecting.”