LEXam: Benchmarking Legal Reasoning on 340 Law Exams
- Date: Jul 14, 2025
- Time: 11:00 AM (local time, Germany)
- Speaker: Yu Fan (ETH Zürich)
- Room: Basement
Long-form legal reasoning remains a key challenge for
large language models (LLMs) in spite of recent advances in test-time scaling.
We introduce LEXam, a novel benchmark derived from 340 law exams spanning 116
law school courses across a range of subjects and degree levels. The dataset
comprises 4,886 law exam questions in English and German, including 2,841
long-form, open-ended questions and 2,045 multiple-choice questions. Besides
reference answers, the open-ended questions are accompanied by explicit guidance
outlining the expected legal reasoning approach, such as issue spotting, rule
recall, or rule application. Our evaluation shows that both the open-ended and
the multiple-choice questions pose significant challenges for current LLMs; in
particular, models notably struggle with open questions that require structured,
multi-step legal reasoning. Moreover, our results underscore the dataset's
effectiveness in differentiating models with varying capabilities.
Adopting an LLM-as-a-Judge paradigm with rigorous human expert validation, we
demonstrate how model-generated reasoning steps can be evaluated consistently
and accurately. Our evaluation setup provides a scalable method to assess legal
reasoning quality beyond simple accuracy metrics. Project page: https://lexam-benchmark.github.io/
Paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5265144
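
As a rough illustration of the LLM-as-a-Judge setup described in the abstract, the sketch below scores a candidate answer against a reference answer and the expected reasoning steps. The rubric wording, the 0-5 scale, the `judge_answer` helper, and the judge model name are illustrative assumptions, not the authors' implementation.

```python
# Minimal LLM-as-a-Judge sketch (illustrative; not the LEXam authors' code).
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# environment; the prompt and scoring scale are assumptions for demonstration.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading a law-exam answer.
Question:
{question}

Reference answer:
{reference}

Expected reasoning steps (e.g. issue spotting, rule recall, rule application):
{guidance}

Candidate answer:
{candidate}

Rate the candidate answer from 0 (no credit) to 5 (fully correct), judging
whether each expected reasoning step is present and correct. Reply with the
numeric score only."""


def judge_answer(question: str, reference: str, guidance: str, candidate: str) -> int:
    """Ask a judge model to score one open-ended answer against the rubric."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed judge model; swap in any capable LLM
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question, reference=reference,
            guidance=guidance, candidate=candidate)}],
        temperature=0,  # deterministic grading for consistency
    )
    return int(response.choices[0].message.content.strip())
```

In practice such judge scores would be validated against human expert grading, as the abstract notes, before being used at scale.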