You will all have seen the recent news items following the high-profile ‘Go’ contest in South Korea, won – for the first time ever – by an AI system against a human Grand Master. This is significant because ‘Go’ is to chess in complexity roughly what chess is to checkers…and so I thought it time to add my own observations. As the science and technology of AI develops, smart artefacts are increasingly being deployed, and their widespread deployment will have profound ethical, psychological, social, economic, and legal consequences for human society and our planet. Here we can only raise, and skim the surface of, some of these issues – but as builders, as well as citizens, it is our duty to think on them and act responsibly.
Artificial autonomous agents are, in one sense, simply the next stage in the development of technology, and in that sense the normal concerns about the impact of technological development apply. In another sense, however, they represent a profound discontinuity. Autonomous agents perceive, decide, and act on their own. This is a radical, qualitative change in our technology and in our image of technology, and it raises the possibility that these agents could take unanticipated actions beyond our control. As with any disruptive technology, there will be substantial positive and negative consequences – many that will be difficult to judge and many that we simply will not, or cannot, foresee.
Example 1 – Logical Answers
- We create an unarmed, all-knowing system to help with environmental problems. It analyses global rainfall patterns, crop growth, population, ‘green’ programs etc;
- It concludes that we are the real problem and wipes us out, thus fixing pollution, war, and poverty, and preserving all planetary species – except one – in one go.
Example 2 – Too much Compassion
- An air-defence system refuses to fire on an incoming suicide plane, as that would harm the passengers (it was built to tackle missiles);
- The government is unable to implement new budget changes, as the Treasury system refuses to enact any measures which would make some humans ‘suffer’.
Example 3 – The Human Touch
- Financial systems begin to ‘co-operate’, manipulating markets not only to make short-term profits, but also to destroy any non-aligned markets and systems;
- A health system not only adjusts treatment advice (to ‘ease out’ those patients with a statistically small ‘life profile’ from hard-pressed budgets); it also manipulates the regular reports and analyses so that its primary actions remain hidden from human or ‘stupid’ system reviews...
So, we need AI systems which understand ‘ethics’ – systems which attempt, if you like, to model conscience, not just consciousness. And this will need to be hard-wired (perhaps literally!), in much the same way that our own core drives are hard-wired into our DNA.
A convenient starting point for consideration is the autonomous vehicle. Experimental autonomous vehicles are seen by many as precursors to robot tanks, cargo movers, and automated warfare. Although there may be, in some sense, significant benefits to robotic warfare, there are also very real dangers. Luckily, these are, so far, only the nightmares of science fiction (think Microsoft Windows 14 Bad Guy Killer – can’t see any problem there…).
There is also the optimistic view of such vehicles: the positive impact of intelligent cars would be enormous. Consider the potential ecological savings of using highways far more efficiently instead of paving over farmland. There is the safety aspect of reducing the annual carnage on the roads: it is estimated that 1.2 million people are killed, and more than 50 million injured, in traffic accidents each year worldwide. Cars could communicate and negotiate at intersections (see the sketch below); besides the consequent reduction in accidents, there could be up to three times the traffic throughput. Elderly and disabled people would be able to get around on their own. People could dispatch their cars autonomously to automated parking warehouses and recall them later, freeing the surface land now used for parking. Truly, the positive implications of success in this area are most encouraging. That there are two radically different, but not inconsistent, scenarios for the outcomes of the development of autonomous vehicles suggests the need for wise ethical consideration of their use.
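To make the negotiation idea concrete, here is a minimal sketch (in Python; every name in it is invented for illustration) of a first-come-first-served reservation scheme of the kind researchers have explored for autonomous intersections: each car requests a time slot to cross, and the intersection grants it only if it does not clash with slots already booked.

```python
# A minimal sketch of reservation-based intersection negotiation.
# All class and method names here are invented for illustration;
# real research proposals are far more sophisticated.

class Intersection:
    def __init__(self):
        self.reservations = []  # (start, end) time slots already granted

    def request_slot(self, start, end):
        """Grant the slot only if it overlaps no existing reservation."""
        for s, e in self.reservations:
            if start < e and s < end:  # the two intervals overlap
                return False           # denied: car slows and re-requests
        self.reservations.append((start, end))
        return True

junction = Intersection()
print(junction.request_slot(10.0, 12.0))  # True: the slot is free
print(junction.request_slot(11.0, 13.0))  # False: clashes with the first car
print(junction.request_slot(12.0, 14.0))  # True: adjacent slots are fine
```

A denied car simply slows down and requests a later slot – no traffic lights required.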
AI is now maturing rapidly, both as a science and, in its technologies and applications, as an engineering discipline. Many opportunities exist for AI to have a positive impact on our planet’s environment. AI researchers and development engineers have a unique perspective and the skills required to contribute practically to addressing concerns of global warming, poverty, food production, arms control, health, education, the aging population, and demographic issues. We could, as a simple example, improve access to tools for learning about AI so that people could be empowered to try AI techniques on their own problems, rather than relying on experts to build opaque systems for them. Games and competitions based on AI systems can be very effective learning, teaching, and research environments, as shown by the success of RoboCup for robot soccer.
We have already considered some of the environmental impact of intelligent cars and smart traffic control. Work on combinatorial auctions, already applied to spectrum allocation and logistics, could further be applied to supplying carbon offsets and to optimizing energy supply and demand. There could be more work on smart energy controllers using distributed sensors and actuators that would improve energy use in buildings. We could use qualitative modelling techniques for climate scenario modelling. The ideas behind constraint-based systems can be applied to analyse sustainable systems. A system is sustainable if it is in balance with its environment: satisfying short-term and long-term constraints on the resources it consumes and the outputs it produces.
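As a toy illustration of that constraint-based view (a sketch only – every flow, unit, and figure below is invented), we can represent a system’s state as a set of resource flows and declare it sustainable exactly when every short-term and long-term constraint on those flows is satisfied:

```python
# A toy constraint-based sustainability check. The flows, limits,
# and figures below are invented purely for illustration.

short_term = {
    "power": lambda s: s["peak_power_draw_kw"] <= s["grid_capacity_kw"],
}
long_term = {
    "carbon": lambda s: s["co2_emitted_t"] <= s["co2_offset_t"],
    "water":  lambda s: s["water_used_ml"] <= s["water_recharged_ml"],
}

def sustainable(state, constraint_sets):
    """A system is sustainable iff every constraint holds on its state."""
    return all(check(state)
               for constraints in constraint_sets
               for check in constraints.values())

state = {"peak_power_draw_kw": 80,  "grid_capacity_kw": 100,
         "co2_emitted_t": 120,     "co2_offset_t": 90,
         "water_used_ml": 40,      "water_recharged_ml": 60}
print(sustainable(state, [short_term, long_term]))  # False: carbon fails
```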
Assistive technology for disabled and aging populations is being pioneered by many researchers. Assisted cognition is one application; others include assisted perception and assisted action, in the form of, for example, smart wheelchairs, companions for older people, and nurses’ assistants in long-term care facilities. However, there are warnings of the dangers of relying on robotic assistants as companions for the elderly and the very young. As with autonomous vehicles, researchers must ask cogent questions about the use of their creations.
Indeed, can we trust robots? There are some real reasons why we cannot yet rely on robots to do the right thing: given the way they are built now, they are not fully trustworthy and reliable. So, can they do the right thing? Will they do the right thing? What is the right thing? In our collective subconscious lurks the fear that robots may eventually become completely autonomous – with free will, intelligence, and consciousness – and rebel against us as Frankenstein-like monsters.
What about ethics at the robot–human interface? Do we require ethical codes, for us and for them? It seems clear that we do, and many researchers are working on this issue; indeed, many countries have come to realize that this is an important area of debate. There are already robot liability and insurance issues. There will have to be legislation that targets robot issues, and professional codes of ethics for robot designers and engineers, just as there are for engineers in all other disciplines. Should we give robots any rights? We have a human rights code; will there be a robot rights code?
To get a handle on these issues, let us break them down into three fundamental questions that must be addressed. First, what should we humans do ethically in designing, building, and deploying robots? Second, how should robots ethically decide, as they develop autonomy and free will, what to do? Third, what ethical issues arise for us as we interact with robots?
In considering these questions, we shall consider some interesting, if perhaps naive, proposals put forward by the science fiction novelist Isaac Asimov, one of the earliest thinkers about these issues. His Laws of Robotics are a good basis from which to start because, at first glance, they seem logical and succinct. His original three Laws are:
- A robot may not harm a human being, or, through inaction, allow a human being to come to harm;
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law;
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
Asimov’s answers to the three questions posed above are as follows. First, those laws must be put into every robot, and manufacturers would be legally required to do so. Second, robots should always have to follow the prioritized laws. But he did not say much about the third question. Asimov’s excellent plots arise mainly from the conflict between what the humans intend the robot to do and what it actually does, or between literal and sensible interpretations of the laws, since they are not codified in any formal language. His fiction explored many hidden contradictions implicit in the laws and their consequences. And if you think those are just stories that can be ignored, well, they form the basis of many current advanced robotics projects – simply because they’re the best that anyone has come up with so far!
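To see why the laws look implementable at first glance, here is a deliberately naive sketch of Asimov’s prioritized scheme as an action filter (the predicates are stubs of my own invention; the genuinely hard part – deciding what counts as ‘harm’ – is precisely what no such code can supply):

```python
# A deliberately naive sketch of Asimov's prioritized laws as an
# action filter. The predicates are stubs: deciding what actually
# 'harms a human' is the unsolved problem the laws assume away.

def harms_human(action):        # First Law predicate (stub)
    return action.get("harms_human", False)

def disobeys_order(action):     # Second Law predicate (stub)
    return action.get("disobeys_order", False)

def endangers_self(action):     # Third Law predicate (stub)
    return action.get("endangers_self", False)

def choose_action(candidates):
    """Filter candidate actions by each law, in priority order."""
    for violates in (harms_human, disobeys_order, endangers_self):
        permitted = [a for a in candidates if not violates(a)]
        if permitted:      # only relax a lower law if nothing survives it
            candidates = permitted
    return candidates[0] if candidates else None

actions = [
    {"name": "push bystander clear", "endangers_self": True},
    {"name": "stand still", "disobeys_order": True},
    {"name": "fire weapon", "harms_human": True},
]
print(choose_action(actions)["name"])  # -> push bystander clear
```

The trouble, as Asimov’s own plots show, is hidden entirely inside those stub predicates.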
There is much discussion of robot ethics now, but much of it presupposes technical abilities that we simply do not yet have. Indeed, some AI thinkers were so concerned about our inability to control the dangers of new technologies that they called, unsuccessfully, for a moratorium on the development of robotics (and AI), nanotechnology, and genetic engineering.
However, robotics may not even be the AI technology with the greatest impact. Consider the embedded, ubiquitous, distributed intelligence in the World Wide Web and other global computational networks. This amalgam of human and artificial intelligence can be seen as evolving to become a World Wide Mind. The impact of this global net on the way we discover and communicate new knowledge is already comparable to the effects of the development of the printing press. As Marshall McLuhan argued, “We shape our tools, and thereafter our tools shape us”. Although he was thinking more of books, advertising, and television, this concept applies even more to the global net and autonomous agents. The kinds of agents we build, and the kinds of agents we decide to build, will change us as much as they will change our society; we should make sure it is for the better.
But will they stop at being ‘just’ tools? The rules of evolution are as universal as physics – think of those birds in Africa that are allowed to peck ticks off hippos, or take morsels of food from the mouths of crocodiles; all manner of creatures co-existing in symbiosis. Ever since the first of our ancestors used a rock or twig to do ‘stuff’ they couldn’t do alone, we have been on this path – fire, agriculture, art, science, and medicine have given us ever-increasing abilities to change the world and now, soon, our very DNA. Any advanced planet in the universe is likely to have only one dominant species. The ethicist Margaret Somerville argues that the species Homo sapiens is evolving into Techno sapiens as we project our abilities out into our technology at an accelerating rate. Many of our old social and ethical codes are broken; they simply do not work in this new world. As creators of the new science and technology of AI, it is our joint responsibility to pay serious attention. The stuff of science fiction is rapidly becoming science fact, and make no mistake – we have a ‘tiger by the tail’.