
About LEXplain

LEXplain investigates the possibilities and limitations of using artificial intelligence to make legal decisions in public administration.


The latest artificial intelligence technologies are based on large language models built through machine learning. Their mode of operation is therefore not fully comprehensible to either ordinary users or experts. This creates a problem in relation to the duty of justification that public authorities must comply with when they make decisions about citizens' rights and duties.

The justification requirement must ensure that the citizen can understand the decision. At the same time, it is a way of ensuring the quality and legitimacy of public decision-making. The requirement entails that administrative authorities must make an effort to inform themselves about and understand the citizen's situation, so that they can apply the legal rules correctly and give the citizen certainty that this has been done.

If the technology used in case processing does not support the justification requirement, it cannot be used in a manner that is satisfactory under the rule of law. LEXplain will investigate how AI technology can be adapted to case processing and decision-making in a way that ensures compliance with the legal obligation to provide justificatory reasons.

The project is a collaboration between lawyers and computer scientists from both Denmark and Norway and involves collaboration with tax and social authorities in both countries.

Project summary

The research focus in LEXplain is aimed at understanding how new hybrid AI technologies can be used to support legal decision-making by adapting them to the existing practice of providing justificatory explainability, which is at the core of the rule of law. The project will tackle the problems associated with contemporary AI technology, especially its inscrutable algorithms. Such black-boxed technology, when used in the context of legal decision-making, challenges several rule-of-law ideals such as transparency in reasoning, accountability and the relevance of the explanation to the case at hand. In short, the use of AI for legal decision-making challenges law's legitimacy. To better understand this problem and how it may be overcome, LEXplain investigates the legal explainability requirement in its historical, cross-jurisdictional and empirical dimensions and probes into how hybrid AI, which combines machine learning with symbolic AI, might be a solution to the rule-of-law concerns associated with black-boxed AI.

There is a strong need to better understand the relationship between explainable AI (XAI) and legal justificatory explanation, and how it might be possible to design a hybrid-AI architecture that supports legal reason-giving for individual decision-making. Investigating "human-in-the-loop" approaches to legal decision-making, LEXplain will examine how public institutions can gain many of the advantages offered by AI while still retaining human control over the decision-making process, thereby upholding explainability and rule-of-law values.

The overarching aim of LEXplain is to create a new knowledge space where AI explainability meets legal explainability in order to push the "XAI for law" research frontier. To do so, LEXplain organizes its research around the following research question: How can legal justificatory explainability be understood, supported and implemented in decision-making practices where AI is increasingly becoming available?

Research objectives

The primary objective of LEXplain is to establish new interdisciplinary knowledge on explainable AI (XAI) in the context of law by researching the explainability culture embedded in legal practice, as a basis for understanding how AI can support decision-making under the rule of law.


The secondary objective is to investigate how new forms of hybrid-AI systems can be used to support legal decision making by combining Large Language Models (LLMs) with knowledge and structure obtained from legislation, legal practice and other legal sources.

Research question and design

LEXplain will focus on AI recommendations in the context of individual legal decision-making in public administration under the rule of law. We find that this focus, rather than full AI automation, presents us with the most enriching field of research in terms of both societal and scientific impact. With a "human-in-the-loop" approach to legal decision-making, public institutions can gain many of the advantages that can be had from AI, while still retaining human control over the decision-making process. LEXplain will pursue this approach by investigating how a new form of hybrid-AI system can be developed to support legal decision-making by combining Large Language Models (LLMs) with knowledge and structure obtained from legislation, legal practice and other legal sources.
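As a rough illustration of what such a human-in-the-loop, hybrid-AI pipeline could look like, the sketch below combines a free-text drafting step (a stand-in for an LLM call) with a symbolic rule layer that ties every recommendation to explicitly encoded statutory conditions, and leaves the final decision with a human caseworker. This is not LEXplain's actual architecture; all provisions, rule names and functions are hypothetical and serve purely as illustration.

```python
# Minimal sketch of a hybrid, human-in-the-loop decision-support pipeline.
# All provisions, rules and functions below are illustrative assumptions,
# not part of the LEXplain project design.

from dataclasses import dataclass, field

@dataclass
class Rule:
    """One statutory condition, encoded symbolically so it can be cited in a justification."""
    provision: str              # e.g. a section of a hypothetical benefits act
    condition: callable         # maps case facts to True/False
    reason_if_met: str
    reason_if_not_met: str

@dataclass
class Justification:
    recommendation: str
    reasons: list = field(default_factory=list)   # provision-by-provision reasons

def draft_recommendation(facts: dict) -> str:
    """Stand-in for an LLM call that drafts a free-text recommendation from case facts."""
    return "Grant the benefit" if facts.get("income", 0) < 250_000 else "Refuse the benefit"

def check_against_rules(facts: dict, rules: list[Rule]) -> Justification:
    """Symbolic layer: every recommendation is traced back to explicit provisions."""
    all_met = all(r.condition(facts) for r in rules)
    reasons = [r.reason_if_met if r.condition(facts) else r.reason_if_not_met for r in rules]
    return Justification("Grant" if all_met else "Refuse", reasons)

def caseworker_review(llm_draft: str, checked: Justification) -> Justification:
    """Human-in-the-loop step: the caseworker keeps final control.
    In practice this would be interactive; here it simply flags disagreement."""
    if not llm_draft.startswith(checked.recommendation):
        checked.reasons.append("Note: LLM draft and rule check disagree; manual assessment required.")
    return checked

if __name__ == "__main__":
    rules = [
        Rule("§ 3(1) residence requirement",
             lambda f: f["resident"],
             "The applicant is resident in the country (§ 3(1)).",
             "The applicant is not resident in the country (§ 3(1))."),
        Rule("§ 4(2) income ceiling",
             lambda f: f["income"] < 250_000,
             "The applicant's income is below the ceiling (§ 4(2)).",
             "The applicant's income exceeds the ceiling (§ 4(2))."),
    ]
    facts = {"resident": True, "income": 180_000}
    decision = caseworker_review(draft_recommendation(facts), check_against_rules(facts, rules))
    print(decision.recommendation)
    for reason in decision.reasons:
        print("-", reason)
```

The point of such a design, on the assumptions made here, is that the provision-by-provision reasons come from the symbolic rule layer rather than from the LLM draft, so the caseworker can check them and the citizen can read them.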

The overarching aim of LEXplain, then, is to create a new knowledge space where AI explainability meets legal explainability in order to push the "XAI for law" research frontier. To do so, LEXplain organizes its research around the following research question:

RQ: How can legal justificatory explainability be understood, supported and implemented in decision-making practices where AI is increasingly becoming available?

To research the interaction between AI systems for legal decision-making support and the justificatory explainability requirements pertaining to legal decision-making, LEXplain conducts an in-depth exploration of the relationship between legal and computational explainability. It does so through a three-dimensional inquiry, seeking to answer the RQ through three overlapping Research Streams:

1: Evolution and differentiation

2: AI explainability support

3: Implementation and transformation