In the News

Coverage, recognition, and awards from leading legal and business media.

Using AI for the Basics of Law
Bloomberg

MPL Risk founder Charlie Hernandez appears on Bloomberg Radio discussing automation in legal tech.

ChatGPT Goes to Law School
Journal of Legal Education, Vol. 71, No. 3 (Spring 2022)

How well can AI models write law school exams without human assistance? To find out, we used the widely publicized AI model ChatGPT to generate answers to the final exams for four classes at the University of Minnesota Law School. We then blindly graded these exams as part of our regular grading processes for each class. Over ninety-five multiple-choice questions and twelve essay questions, ChatGPT performed on average at the level of a C+ student, achieving a low but passing grade in all four courses. After detailing these results, we discuss their implications for legal education and lawyering. We also provide example prompts and advice on how ChatGPT can assist with legal writing.

Los Angeles Lawyer Cover Article
Los Angeles Lawyer Magazine

Cover story on Charlie Hernandez and the role of AI in law.

MPL Group on Fox Business News
Fox Business

Charlie Hernandez featured on Fox Business News discussing how AI is changing the legal profession.

LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models
Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks

The advent of large language models (LLMs) and their adoption by the legal community has given rise to the question: what types of legal reasoning can LLMs perform? To enable greater study of this question, we present LegalBench: a collaboratively constructed legal reasoning benchmark consisting of 162 tasks covering six different types of legal reasoning. LegalBench was built through an interdisciplinary process, in which we collected tasks designed and hand-crafted by legal professionals. Because these subject matter experts took a leading role in construction, tasks either measure legal reasoning capabilities that are practically useful, or measure reasoning skills that lawyers find interesting.
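
For readers curious what evaluation on a benchmark like this looks like in practice, the sketch below scores a model on a small yes/no issue-spotting task written in the LegalBench style. It is a minimal illustration, not the benchmark's official tooling: the two toy examples, the classify helper, the model name, and the use of the OpenAI Python SDK are all assumptions for demonstration.

```python
# Minimal sketch of a LegalBench-style yes/no task evaluation.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the two toy examples below are illustrative, not drawn from LegalBench itself.
from openai import OpenAI

client = OpenAI()

# Each example pairs a short clause with a gold yes/no label.
examples = [
    {"text": "The contract requires disputes to be arbitrated in Delaware.",
     "question": "Does this clause contain a forum-selection provision?",
     "label": "Yes"},
    {"text": "The lease is silent on who pays for routine maintenance.",
     "question": "Does this clause contain a forum-selection provision?",
     "label": "No"},
]

def classify(text: str, question: str) -> str:
    """Ask the model for a one-word Yes/No answer to an issue-spotting question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute any available model
        messages=[
            {"role": "system", "content": "Answer with exactly one word: Yes or No."},
            {"role": "user", "content": f"Clause: {text}\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().rstrip(".")

correct = sum(classify(ex["text"], ex["question"]) == ex["label"] for ex in examples)
print(f"Accuracy: {correct}/{len(examples)}")
```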

Legalweek 2024
Law.com / ALM Legalweek

Keynote speech delivered at ALM Legalweek 2024 on the future of AI in compliance.

AI Tools for Lawyers: A Practical Guide
108 Minnesota Law Review Headnotes 1 (2023)

This Article provides lawyers and law students with practical and specific guidance on how to effectively use AI large language models (LLMs), like GPT-4, Bing Chat, and Bard, in legal research and writing. Focusing on GPT-4, the most advanced LLM that is widely available at the time of this writing, it emphasizes that lawyers can use traditional legal skills to refine and verify LLM legal analysis. In the process, lawyers and law students can effectively turn freely available LLMs into highly productive personal legal assistants.
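
As a rough illustration of the draft-then-verify workflow the Article describes, the sketch below asks a model for a short legal drafting passage and then surfaces the citations it claims so a lawyer can check them against primary sources. The prompt, model name, and citation pattern are illustrative assumptions, not prompts or tools from the Article itself.

```python
# Rough sketch of the draft-then-verify workflow described above.
# The prompt, model name, and citation regex are illustrative assumptions;
# any citations the model produces must still be checked against primary sources.
import re
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft a short paragraph summarizing the standard for granting a "
    "preliminary injunction in federal court, citing controlling authority."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)
draft = response.choices[0].message.content

# Naive pattern covering only a few common reporters (U.S., S. Ct., F.2d/3d/4th-style);
# it is a starting point for manual verification, not a substitute for reading the cases.
citations = re.findall(r"\b\d{1,3}\s+(?:U\.S\.|S\. Ct\.|F\.[234]d)\s+\d{1,4}\b", draft)

print(draft)
print("\nCitations to verify by hand:")
for cite in sorted(set(citations)):
    print(" -", cite)
```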

Florida Legal Awards Innovator of the Year
Daily Business Review (Law.com)

MPL Group recognized as the 2024 Legal Innovator of the Year.

Tech Shifts & the Law
Los Angeles Lawyer Feature Article

Understanding historical patterns of technology adoption helps anticipate how AI will be integrated into legal practice.

Artificial Intelligence and the Law
NBC

NBC spotlights MPL Group's legal technology.

Lawyering in the Age of Artificial Intelligence
109 Minnesota Law Review (Forthcoming 2024)

We conducted the first randomized controlled trial to study the effect of AI assistance on human legal analysis. We randomly assigned law school students to complete realistic legal tasks either with or without the assistance of GPT-4, tracking how long the students took on each task and blind-grading the results. We found that access to GPT-4 only slightly and inconsistently improved the quality of participants' legal analysis but induced large and consistent increases in speed.

New York Law Journal Professional Excellence Award
New York Law Journal

MPL Group named among finalists for NYLJ's 2024 innovation awards.

How Will AI Affect the Legal Profession?
KTLA 5

Morning news segment showcasing the use of AI for legal documents.

Off-the-Shelf Large Language Models Are Unreliable Judges
Working Paper

I conduct the first large-scale empirical experiments to test the reliability of large language models (LLMs) as legal interpreters. Combining novel computational methods with the results of a new survey, I find that LLM judgments are highly sensitive to prompt phrasing, output processing methods, and choice of model. I also find that frontier LLMs do not accurately assess linguistic ordinary meaning, and I provide original evidence that this is in part due to post-training procedures. These findings undermine LLMs' credibility as legal interpreters and cast doubt on claims that LLMs elucidate ordinary meaning.
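
A toy version of the prompt-sensitivity finding can be reproduced in a few lines: pose the same ordinary-meaning question to one model under several paraphrased prompts and compare the answers. The paraphrases, model name, and answer parsing below are assumptions for illustration and do not reproduce the paper's experimental setup.

```python
# Toy demonstration of prompt sensitivity: the same interpretive question,
# phrased three ways, may elicit different answers from the same model.
# The paraphrases and model name are illustrative assumptions only.
from collections import Counter
from openai import OpenAI

client = OpenAI()

paraphrases = [
    "Is a drone a 'vehicle' within the ordinary meaning of that word? Answer Yes or No.",
    "In everyday English, would people call a drone a vehicle? Answer Yes or No.",
    "Does the ordinary meaning of 'vehicle' include drones? Answer Yes or No.",
]

answers = []
for prompt in paraphrases:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answers.append(response.choices[0].message.content.strip().split()[0].rstrip(".,"))

# Disagreement across paraphrases of the same question signals unreliability.
print(Counter(answers))
```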

Simplifying Legal Documents with AI
Good Day LA

Feature segment covering MPL Group's platform for small businesses.

AI Assistance in Legal Analysis: An Empirical Study
73 Journal of Legal Education (Forthcoming 2024)

Can artificial intelligence (AI) augment human legal reasoning? To find out, we designed a novel experiment administering law school exams to students with and without access to GPT-4, the best-performing AI model currently available. We found that assistance from GPT-4 significantly enhanced performance on simple multiple-choice questions but not on complex essay questions. We also found that GPT-4's impact depended heavily on the student's starting skill level; students at the bottom of the class saw huge performance gains with AI assistance, while students at the top of the class saw performance declines. This suggests that AI may have an equalizing effect on the legal profession, mitigating inequalities between elite and nonelite lawyers.

Forbes Highlights MPL Group
Forbes

Forbes feature on how MPL Group is bringing AI legal help to the business community.

How to Use Large Language Models for Empirical Legal Research
Journal of Institutional and Theoretical Economics (Forthcoming)

Legal scholars have long annotated cases by hand to summarize and learn about developments in jurisprudence. Dramatic recent improvements in the performance of large language models (LLMs) now provide a potential alternative. This Article demonstrates how to use LLMs to analyze legal documents. It evaluates best practices and suggests both the uses and potential limitations of LLMs in empirical legal research. In a simple classification task involving Supreme Court opinions, it finds that GPT-4 performs approximately as well as human coders and significantly better than a variety of prior-generation NLP classifiers, with no improvement from fine-tuning.
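
To make the annotation setup concrete, here is a minimal sketch of LLM-assisted coding of opinion excerpts against a fixed codebook, with agreement against a human coder as the metric. The codebook, excerpts, labels, and SDK call are invented for illustration and are not the Article's actual coding scheme or data.

```python
# Minimal sketch of LLM-assisted annotation of judicial opinion excerpts.
# The codebook, excerpts, and gold labels below are invented for illustration.
from openai import OpenAI

client = OpenAI()

CODEBOOK = ["statutory interpretation", "constitutional law", "procedure"]

documents = [
    {"excerpt": "The question is whether the statute's text extends to digital records.",
     "human_label": "statutory interpretation"},
    {"excerpt": "Petitioner argues the search violated the Fourth Amendment.",
     "human_label": "constitutional law"},
]

def annotate(excerpt: str) -> str:
    """Ask the model to pick exactly one label from the codebook."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Classify the excerpt. Reply with exactly one of: " + ", ".join(CODEBOOK)},
            {"role": "user", "content": excerpt},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

agreement = sum(annotate(d["excerpt"]) == d["human_label"] for d in documents)
print(f"Model-human agreement: {agreement}/{len(documents)}")
```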

In Defense of the Billable Hour: A Monitoring Theory of Law Firm Fees
South Carolina Law Review, Volume 70, Issue 2, Winter 2018

The billable hour has an image problem. It has become a scapegoat for all the unpleasantries of law firm life: long hours, dull work, cantankerous clients. Associates see the billable hour as the most visible symbol of their bondage, an arch-capitalistic machine that transforms time into dollars and bright-eyed young lawyers into fee-producing zombies.

Prose and Cons: Evaluating the Legality of Police Stops with Large Language Models
Working Paper

In the near future, algorithms may assist law enforcement with real-time legal advice. We take a step in this direction by evaluating how well current AI can perform legal analysis of the decision to stop or frisk pedestrians, comparing multiple algorithmic and non-algorithmic approaches. We find that large language models (LLMs) can accurately assess reasonable suspicion under Fourth Amendment standards.

Interrogating LLM design under a fair learning doctrine
2025 ACM Conference on Fairness, Accountability, and Transparency

The current discourse on large language models (LLMs) and copyright largely takes a "behavioral" perspective, focusing on model outputs and evaluating whether they are substantially similar to training data. However, substantial similarity is difficult to define algorithmically and a narrow focus on model outputs is insufficient to address all copyright risks. In this interdisciplinary work, we take a complementary "structural" perspective and shift our focus to how LLMs are trained.

Measuring Clarity in Legal Text
The University of Chicago Law Review, Vol. 91, No. 1 (January 2024)

Legal cases often turn on judgments of textual clarity: when the text is unclear, judges allow extrinsic evidence in contract disputes, consult legislative history in statutory interpretation, and more.
