The European Group on Ethics in Science and New Technologies (EGE): Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems
Advances in AI, robotics and so-called ‘autonomous’ technologies have ushered in a range of increasingly urgent and complex moral questions. Current efforts to find answers to the ethical, societal and legal challenges that they pose and to orient them for the common good represent a patchwork of disparate initiatives. This underlines the need for a collective, wide-ranging and inclusive process of reflection and dialogue, a dialogue that focuses on the values around which we want to organise society and on the role that technologies should play in it.
This statement calls for the launch of a process that would pave the way towards a common, internationally recognised ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics, and ‘autonomous’ systems. The statement also proposes a set of fundamental ethical principles, based on the values laid down in the EU Treaties and the EU Charter of Fundamental Rights, that can guide its development.
The first two decades of the 21st century have brought us striking examples of what is commonly referred to as ‘autonomous technology’ and ‘artificial intelligence’. Self-driving cars and drones, robots in deep sea and space exploration, weapon systems, software agents such as bots in financial trading, and deep learning in medical diagnosis are among the most prominent, but certainly not the only, examples. Artificial intelligence (AI), especially in the form of machine learning, and the increasing availability of large datasets from various domains of life are important drivers of these developments. The confluence of these digital technologies is rapidly making them more powerful; they are applied in an increasing number of new products and services, in the public and private sectors, and can have both military and civilian applications. The AI lodged in these systems can redefine work or improve working conditions for humans and reduce the need for human contribution, input and interference during operation. It can assist or replace humans with smart technology in difficult, dirty, dull or dangerous work, and even beyond.
Without direct human intervention and control from outside, smart systems today conduct dialogues with customers in online call centres, steer robot hands to pick and manipulate objects accurately and incessantly, buy and sell stock in large quantities in milliseconds, direct cars to swerve or brake to prevent a collision, classify persons and their behaviour, or impose fines.
It is unfortunate that some of the most powerful among these cognitive tools are also the most opaque. Their actions are no longer programmed by humans in a linear manner. Google Brain develops AI that allegedly builds AI better and faster than humans can. AlphaZero can bootstrap itself in four hours from knowing nothing of chess beyond its rules to world-champion level. It is impossible to understand how exactly AlphaGo managed to beat the human Go world champion. Deep learning and so-called ‘generative adversarial network’ approaches enable machines to ‘teach’ themselves new strategies and look for new evidence to analyse. In this sense, their actions are often no longer intelligible, and no longer open to scrutiny by humans. This is the case because, first, it is impossible to establish how they accomplish their results beyond the initial algorithms. Second, their performance is based on the data used during the learning process, which may no longer be available or accessible. Thus, biases and errors that they have been presented with in the past become ingrained in the system.
When systems can learn to perform these tasks without human direction or supervision, they are now often called ‘autonomous’. These so-called ‘autonomous’ systems can manifest themselves as high-tech robotic systems or as intelligent software such as bots. Many of them are released into the world unsupervised and may accomplish things that were not foreseen by their human designers or owners.
We thus see the following relevant developments in technology:
1) Artificial intelligence in the form of machine learning (especially ‘deep learning’), fuelled by Big Data, is rapidly becoming more powerful. It is applied in an increasing number of new digital products and services in the public and private sectors and can have both military and civilian applications. As noted, AI’s inner workings can be extremely hard, if not impossible, to track, explain and critically evaluate. These advanced capabilities are accumulating largely in private hands and are for the most part proprietary.
2) Advanced mechatronics (a combination of AI and deep learning, data science, sensor technology, the Internet of Things, and mechanical and electrical engineering) is providing a wide range of increasingly sophisticated robotic and high-tech systems for practical applications in the service and production industries, health care, retail, logistics, domotics (home automation), and security and safety. Two domains of application that stand out in public debates are robotic weapons systems and ‘autonomous’ vehicles.
3) Ever smarter systems are produced that exhibit high degrees of what is often referred to as ‘autonomy’, meaning that they can develop and perform tasks independently of human operators and without human control.
4) There seems to be a push for ever higher degrees of automation and ‘autonomy’ in robotics, AI and mechatronics. Countries and large companies are investing enormously in this field, and a leading position in AI research is among the prominent goals of the world’s superpowers.
5) There is a development towards ever closer interaction between humans and machines (co-bots, cyber-crews, digital twins and even the integration of smart machines into the human body in the form of computer-brain interfaces or cyborgs). Similar developments can be seen across the AI realm. Well-aligned teams of AI systems and human professionals perform better in some domains than humans or machines separately.
The advent of high-tech systems and software that can function increasingly independently of humans, and can execute tasks that would require intelligence if carried out by humans, warrants special reflection. These systems give rise to a range of important and hard moral questions.
First, there are questions about safety, security, the prevention of harm and the mitigation of risks. How can we make a world with interconnected AI and ‘autonomous’ devices safe and secure, and how can we gauge the risks?
Second, there are questions about human moral responsibility. Where is the morally relevant agency located in dynamic and complex socio-technical systems with advanced AI and robotic components? How should moral responsibility be attributed and apportioned, and who is responsible (and in what sense) for untoward outcomes? Does it make sense to speak about ‘shared control’ and ‘shared responsibility’ between humans and smart machines? Will humans be part of ecosystems of ‘autonomous’ devices as moral ‘crumple zones’, inserted merely to absorb liability, or will they be well placed to take responsibility for what they do?
Third, they give rise to questions about governance, regulation, design, development, inspection, monitoring, testing and certification. How should our institutions and laws be redesigned to make them serve the welfare of individuals and society and to make society safe for this technology?
Fourth, there are questions regarding democratic decision-making, including decision-making about the institutions, policies and values that underpin all of the questions above. Investigations are being carried out across the globe to establish the extent to which citizens are taken advantage of through the use of advanced nudging techniques based on the combination of machine learning, big data and behavioural science, which make possible the subtle profiling, micro-targeting, tailoring and manipulation of choice architectures in accordance with commercial or...