New Frontiers in Legal Education
Dean Onwuachi-Willig considers legal education in the age of artificial intelligence.

When ChatGPT, a sophisticated artificial intelligence (AI) chatbot, reached an estimated 100 million users just two months after its launch in November 2022, the world—and especially the world of higher education—took notice. ChatGPT and other large language models have the potential to inspire innovation and efficiency and to change the current practices and approaches of countless professions.

The rise of AI makes this moment a fascinating time to be an educator. As faculty and administrators, we have an obligation to prepare students for a professional and social world with jobs and opportunities that very few of us can even imagine. The uncertainties spawned by AI have driven legal educators across the nation to think differently about how we teach and, specifically, about how we might incorporate AI tools into both our classroom and clinical teaching.
The challenges presented by AI, however, are not new. For years, AI has been used to aid legal research and discovery. As AI tools have become more advanced, law firms have begun to use them to analyze and summarize documents, draft contracts and deposition questions, and more.
Still, using AI tools in legal practice can present real dangers. The tools are far from perfect. Among their flaws are the biases that get built into AI by the humans who construct such technologies. Just as students must learn to be critical consumers of legal doctrine, they must also learn to be questioning and critical of AI. Other dangers include fabricated responses, such as the fictitious cases and citations that tools like ChatGPT have produced. Reports of attorneys who used ChatGPT to write briefs citing made-up cases serve as cautionary tales for law students and practicing lawyers alike, not only because of the risk of getting caught and facing disciplinary measures within the profession, but also because of the impact on their clients: the people who have come to them for help and who rely on their expertise and training.
These warnings about AI’s perils have served as motivation and inspiration for our faculty, who are thinking deeply about how to engage with the tools, including how to constrain their use, when needed, to prepare BU Law students for the profession of today and tomorrow. After all, future generations of lawyers will need to understand AI—its capabilities and its flaws—to advise their clients and use these tools responsibly and ethically.
Throughout this past academic year, BU Law faculty, like many across the country, began to engage with AI tools. In so doing, some have found creative ways to incorporate them into their pedagogy, while others have approached AI with skepticism. An overarching concern for educators is that our students may begin to rely on these tools in their assignments. As legal educators, we worry even more about overreliance on AI tools because of the skills our students must learn to obtain a license to practice law. Specifically, our students need to take and pass the bar examination, which requires writing essays without the assistance of AI. Doing so demands foundational lawyering skills, such as strong writing, critical thinking and analysis, and good judgment, all of which will serve them in their careers as attorneys as well. These skills are hard to teach, particularly in a society challenged (as well as advanced) by AI, but our faculty remain committed to ensuring our graduates have these necessary foundations.
The AI/Tech and Education Committee—led last spring by Professor Katharine Silbaugh and this fall by Christopher Conley, director of the Privacy, Security & Health Practice Group in the BU/MIT Student Innovations Law Clinic (formerly the Technology Law Clinic)—has been instrumental in preparing the faculty to work and teach in a post-AI world. Through its work, the committee has offered resources to help BU Law professors learn about the capabilities of AI and think through how to use it to the benefit of our students. The committee has also developed recommendations for student assessments that promote academic integrity within an AI environment.
The way we approach teaching the law in the post-AI world also requires continued engagement in interdisciplinary collaborations. BU Law has long been at the forefront of these efforts. For example, our health law faculty have a longstanding and highly productive partnership with their colleagues in the School of Public Health, and the intellectual property faculty have performed outstanding work with the Faculty of Computing & Data Sciences and the Rafik B. Hariri Institute for Computing and Computational Science & Engineering.
As we look to the future, we remain committed to fostering innovation within our BU Law community and across campus. Boston University has always been an innovative school, and as AI continues to advance, we at BU expect to remain on that cutting edge.