Trustworthy artificial intelligence and human rights

Author: Migle Laukyte
Carlos III University of Madrid, Spain
In April 2019, the High-Level Expert Group on Artificial Intelligence
(AI), set up by the European Commission, published its Ethics Guidelines
for Trustworthy Artificial Intelligence (henceforth Guidelines),1 which
address the future of AI development in the European Union (EU). In
particular, these Guidelines lay out a vision of AI that Europe should
foster, identifying the features that any AI-based system ought to have.
The framework comprises three parts dedicated to three characteristics
that should define AI, namely: it ought to be lawful (i.e., legally
compliant), ethical, and robust. The Guidelines focus on ethical and
robust AI: they do not address legal compliance in AI. Although human
rights issues pertain to this latter part, and their legal implications are
therefore not dealt with in the Guidelines, they still figure prominently in
the part dedicated to ethical AI, because human rights are not only legally
enforceable but are also “special moral entitlements of all individuals
arising by virtue of their humanity” (Guidelines 2019, fn. 12, 7). It is no
surprise, then, that one of the key ideas these Guidelines introduce is
that of a fundamental rights-based approach to AI.
The aim of this paper is to take a closer look at how the Guidelines
address human or fundamental rights.2 This analysis should also help us
1 Available at
2 Although I do appreciate the difference between the two terms, "fundamental rights"
and "human rights," they will be used interchangeably in this discussion.
