Artificial Intelligence (AI) – Ethics or Regulations?
Artificial Intelligence is most familiar to people as the technology that makes robots independently conscious and potentially dangerous. Although created by humans, these independent robots are beyond our control and can outdo us even in domains where we have always considered ability and mastery to be specifically human, part of our very identity.
There are psychological antecedents throughout human history for this disquiet and trepidation. In many early mythologies the assumption of godlike power by humans, or the imparting of godlike powers to human creations, was severely punished, whether by a god, goddess, or pantheon, or by the workings of human hubris itself. The stories of Daedalus and Icarus, Prometheus, and Pygmalion in Greek mythology are cases in point. In the Gothic novel Frankenstein, by Mary Wollstonecraft Shelley, the creature made by Victor Frankenstein destroys his creator’s loved ones when Frankenstein refuses to do as he wishes.
Today we live in a world where artificial intelligence has become almost all-pervasive. However, the insinuation of AI and Machine Learning (ML) into our lives has not come in the shape of mechanical servants who threaten to take over our world and enslave us. Instead it has been hidden in services we have all come to take for granted. Our mobile phones run apps offering services most of us have come to love, and we have been seduced by what they offer our minds and lifestyles. We can search for anything we need on the internet, whether information, goods, or services of any type, and have it delivered almost instantaneously (in the case of digital download or web access) or shipped to us within 24 hours to a few days – even from across the planet.
All these services collect data about us and everything we do online. They then use AI/ML algorithms to identify patterns in what we do, what we buy, what we like or don’t like, where we go, even what we read and write online. From these patterns they build psychological profiles to predict what we might like and what we will pay (or pay more) for, now and in the future. All of this data is used not only by the collectors; it is also packaged and sold to anyone, or any entity, that will pay for it. These people, companies, or other entities are then free to use this data – our data – in whatever way they like, nominally subject to legal agreements with the suppliers and to national or international laws. Those agreements and laws have been shown again and again to be inadequate in protecting individuals, communities, societies, nation states (whether democracies or not), and even international blocs.
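To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of the kind of pattern-finding pipeline described above. The feature names, numbers, and models are invented assumptions for the sake of the example, not any real company’s system: behavioural data is clustered into segments, and a simple regression then predicts what a user is likely to pay.

# A purely illustrative sketch of behavioural profiling. All data and
# feature names are invented; real commercial pipelines are far larger,
# but the principle is the same: cluster users by behaviour, then
# predict what each of them is likely to spend.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=42)

# Hypothetical per-user features a service might collect:
# [pages viewed per day, purchases per month, average session minutes]
users = rng.normal(loc=[20.0, 2.0, 15.0], scale=[8.0, 1.0, 6.0], size=(500, 3))

# Step 1: find patterns – group users into behavioural segments.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(users)

# Step 2: predict – learn a model of monthly spend from behaviour.
# (Synthetic target: heavier engagement means higher spend, plus noise.)
spend = users @ np.array([0.5, 12.0, 0.8]) + rng.normal(0.0, 5.0, size=500)
model = LinearRegression().fit(users, spend)

# A new visitor's behaviour alone is enough to place them in a segment
# and estimate their likely spend – before they have bought anything.
new_user = np.array([[35.0, 4.0, 25.0]])
print("segment:", kmeans.predict(new_user)[0])
print("predicted monthly spend:", round(model.predict(new_user)[0], 2))

The point of the sketch is how little is needed: a few hundred rows of behavioural data and two textbook algorithms are enough to segment a stranger and put a price on them.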
We have seen AI misused to identify vulnerable individuals for profit or to further political ambition. We have seen entire societies and nations manipulated, in supposedly free and fair democratic elections, into choosing a minority candidate as head of state, into handing minority parties an engineered governing majority, or even into choosing a path dangerous and harmful to the good of the nation, at least in the short and medium term. The sixteen cases of political manipulation Cambridge Analytica owned up to, and publicized in its client literature, may be just the tip of the iceberg. Numerous governments have assured us that there is no doubt foreign, and often hostile, nations have interfered in their internal affairs. It makes no difference whether the nation is large or small, a superpower or a supranational bloc.
What is going on? Are the creators of Artificial Intelligence (AI) out of control? Are the developers and companies producing products that use and explore the potential of AI lacking in ethics? Is there a “clear and present danger” to [individuals, society, democracy, privacy, the world, (please enter your potential victim of choice)]? Is it desirable, even necessary perhaps, to regulate the use of AI going forward?
These questions have been asked before, and not only since the development of modern computer-powered AI, whose foundations can be traced back to the code-breaking machines built to decipher the German Enigma ciphers in the 1940s. The term Artificial Intelligence itself was coined in 1956 for the Dartmouth conference held by Professor John McCarthy to inaugurate AI as a research field in computer science. Earlier still, in 1942, the science fiction author and biochemist Isaac Asimov formulated three laws of robotics in his short story Runaround. The three laws were designed to protect humans from harm through the agency of robots:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence unless such protection conflicts with the First or Second Law.
He later introduced a fourth law, the Zeroth Law, which outranked the others:
0. A robot may not harm humanity or, by inaction, allow humanity to come to harm.
In the 1980s, when AI was heralded as the 5th Generation of computing (some more recent chroniclers forget that the Japanese Fifth Generation project began in the 1980s and ascribe it to the 2010s), there was a brief flurry of disquiet about the dangers of machines taking over the world. When the promised (or feared) machine domination failed to materialize, the disquiet died down again and the subject became the domain of science fiction and Marvel comics.
As the third decade of the 21st century dawns, the questions have resurfaced with added urgency. We are now entering an era in which the development of artificial intelligence, machine learning, and robotics has been pushed forward at an ever-faster pace by algorithmic advances in deep learning and improved neural networks, combined with exponential increases in data gathering, storage, and computing power, giving data scientists unprecedented abilities in data manipulation.
Data has now become one of the world’s most valuable and sought-after commodities. The European Union estimates that the monetary value of EU data will reach €1 trillion in 2020, and in 2017 the World Economic Forum put the global figure at $3 trillion. Estimates vary, but the potential value of AI-related revenue to the Irish economy by 2030 has been pegged at over €40 billion.
The major multinationals that control the acquisition, use, and manipulation of the majority of consumer data on the planet have thus far failed to come up with a coherent approach to enforcing privacy and ethical standards. That failure has left political entities feeling forced to take a stand on behalf of their populations.
There are initiatives underway in many countries and political blocs to try to address the perceived issues around the use of AI. Countries such as Australia, Canada, China, France, Finland, Germany, India, Japan, New Zealand, Singapore, South Korea, the United Kingdom, and the United States, to name some of those in the vanguard, are in the process of developing AI ethical frameworks. In May 2019, the OECD and its member countries adopted a non-binding set of guidelines for AI development.
The European Union is following the principle that AI must be human-centric. Following the publication of the EU guidelines on ethics in AI, in April 2019, the Commission launched a pilot phase in June 2019 and invited all stakeholders to provide feedback on the practical implementation of the key requirements by the end of 2019. To this end, companies participating in the pilot will report on their experience in implementing the guidelines. Based on the feedback received, the High-Level Expert Group on AI will propose a revised version of the compliance assessment list to the Commission in early 2020. In addition, the European Commission President-elect, Ursula von der Leyen, has announced that she will put forward legislative proposals for a coordinated European approach on the human and ethical implications of AI within her first 100 days in office¹.
Does this mean the “wild west” days of the AI industry are drawing to a close and the sheriffs and marshals are coming to round up the outlaws? Unfortunately, there are still many jurisdictions where there are no moves, and no appetite, in any direction – whether ethical or regulatory – to rein in the excesses of those misusing AI. Should we think along the ethical lines outlined above for regulation or self-regulation of AI? Do the concepts enshrined in laws, from those formulated by Asimov to the standards proposed by modern nation states and blocs, go far enough to ensure AI will do no harm?
As we know now from the constant hacking away at our legal systems by those seeking loopholes to exploit for profit, power, and self-aggrandizement, we need to be constantly vigilant and ask again at every juncture in the development of AI, “Are we doing enough to protect people, society, and especially the vulnerable, from exploitation?”
¹ From the European Parliament Briefing: EU Guidelines on ethics in artificial intelligence: Context and implementation, ©European Union, 2019
https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/640163/EPRS_BRI(2019)640163_EN.pdf
© Tom Halton, 2019. All Rights Reserved.