Autonomous #Drones, Morals, Ethics, and the #Law: Jurisprudential Considerations, Part II
This is a continuing working paper exploring how, legally and morally, we ought to be more socially responsible when developing technology, before we create potential monsters: killing machines capable of making decisions for themselves, without human intervention.
This article was sparked by recent news from the military arena that there now exists a drone with limited artificial intelligence, capable of deciding for itself whether or not to destroy an enemy target.
In Part I, I referred to social media platforms and the need to re-think how we regulate such matters. In doing so, I adopted a Jurisprudential normative positive approach.
I continue with the same approach, accepting that this is not by any means the only approach, and that different schools of Jurisprudential thought may well result in different outcomes.
The crux of this paper continues to be that technology is developing faster than our moral and legal considerations towards it, resulting in an element of instability and disharmony in society.
SCOPE OF THIS CONTINUING PAPER:
I am an avid fan of technology, but not a technical person, so my knowledge of how these things actually work and what they do is limited. I do, however, have a working knowledge of military tactics and strategy, and an eclectic interest in science-fiction, its impact upon Society, and what our Law perhaps ought to be on such matters. This working paper/article does not seek to consider vicarious liability, other than to say that a dog which does wrong, for example, is sometimes destroyed, and its owner punished under the Dangerous Dogs Act, because the dog was not properly controlled. Equally, a rogue robot would necessarily be destroyed, and the Corporation behind it punished.
A long time ago, in a Galaxy far, far away…
The legendary opening words of the Star Wars movies. Yet, that Galaxy of space ships seems a lot closer, given the way technology is developing.
Take Star Trek, for instance: all that gadgetry once seemed remote and far-fetched, and yet here we are in our modern World, taking many of these developments for granted without thinking where they originated (e.g. phasers/taser-guns, communicators/mobile phones, ship's computers/actual computers). Even Professor Brian Cox @BrianCox commented on the Rob Brydon show that teleportation has been achieved on a micro-scale, and is possible, at least in theory, on a larger scale.
Do you remember (I stand to be corrected) whether there were any truly autonomous robots/drones in Star Trek or Star Wars? Sure, there was the space probe which returned from deep space, having been tampered with by alien technology, seeking its maker. It killed things in its way because it was on a mission to return to its source, and ultimately it bonded with man and found peace.
Then, in a later science-fiction setting, there was Lost in Space, in which the Robinson family's robot, once re-wired, became a killer.
Then of course there were the Decepticons from Transformers.
However, more sinister, and perhaps veering towards the dark side of what should be our biggest nightmare, was the poignant film Robocop: on the one hand, a man who is half man, half robot; on the other, machines without the human element, which malfunction and destroy half the room.
What did those machines lack? A conscience; A human element.
In Yevgeny Zamyatin’s book ‘We’, a dystopia is created following a 200-year War after the World’s resources ran out: a World of glass cubes, centred around the purity of mathematics and equations which decide the optimum times to work, to sleep, to eat, and so on. This book significantly pre-dates Orwell’s 1984; it was written just after World War I, at the start of the communist revolution in Russia.
The main character, and others, started to reject the happy-clappy perfect society and developed a soul; A conscience, sparking a mini-revolution founded on the claim that things were finite, and not infinite. People have a choice. They should not operate simply upon a command without occasionally questioning whether what they do is right.
Shortly after this, however, when science-fiction was…still science-fiction, Isaac Asimov pondered robotics and how it might affect humanity when, in 1942, he wrote his story entitled ‘Runaround’.
He was prophetic in his considerations. He was not guided by mass-production and profit-making consumerism. Rather, he was concerned that robots might be used as weapons of destruction, at a time, of course, when the World was at war, and horrific acts were being carried out against humanity.
His Laws of #Robotics were based upon what was socially acceptable, and therefore incapable of being used against society. He was interested in protecting humanity, and in re-considering altruistic, socially accepted views to find a new norm. He considered further, from that stance, that Law exists to protect such values. To that end, he created 3 Laws of Robotics, as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the 1st Law;
3. A robot must protect its own existence, as long as such protection does not conflict with the 1st or 2nd Law.
Later still, in 1985, he felt his initial 3 Laws were insufficient, and in his book of that year, ‘Robots and Empire’, he created the ‘Zeroth’ Law, to which the other Laws were subordinate.
Zeroth Law: A robot may not injure humanity or, through inaction, allow humanity to come to harm.
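The logical structure of the four Laws can be made concrete in a short sketch. The following is a purely hypothetical illustration (not any real robotics API; all names and flags are invented for the example), encoding the Laws as an ordered veto check in which a proposed action must pass every Law, with the Zeroth Law first:

```python
# A hypothetical sketch of Asimov's four Laws as an ordered veto check.
# Each proposed "action" is an illustrative dict of boolean flags; all
# names here are invented for the example.

def permitted(action):
    """Return True only if the action violates none of the four Laws."""
    laws = [
        # Zeroth Law: may not injure humanity, nor by inaction allow it to come to harm.
        lambda a: not a.get("harms_humanity") and not a.get("inaction_harms_humanity"),
        # First Law: may not injure a human being, nor by inaction allow one to come to harm.
        lambda a: not a.get("harms_human") and not a.get("inaction_harms_human"),
        # Second Law: must obey human orders (a higher Law has already been checked).
        lambda a: a.get("obeys_order", True),
        # Third Law: must protect its own existence (subject to the Laws above).
        lambda a: not a.get("self_destructive", False),
    ]
    return all(law(action) for law in laws)

# An order to harm a human fails the First Law, whatever the order says:
assert not permitted({"harms_human": True, "obeys_order": True})
# A harmless, obedient action passes all four Laws:
assert permitted({"obeys_order": True})
```

The point of the sketch is the ordering: obedience and self-preservation are only ever tested after the harm-to-humans checks have passed, which is precisely the subordination Asimov intended.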
What considerations have truly been given to autonomous robots?
Some years ago, when drones first flew the skies, there was controversy even then. They were nevertheless guided and controlled with a human element. The decision to press the trigger was not for a machine to make; it was a human who made that decision.
My concerns are threefold:
1. Equations and algorithms are ever more complex, and there are many situations and circumstances we could imagine which could be placed into a computer programme. But what about the unknown situations?
An example: A drone has been sent to destroy an enemy tank making its way towards a home base. The markings identify the tank to the computer as enemy, but the troops inside are in fact friendly special forces, having captured the tank. Does the computer know this? Would a human know this? The troops have some communications and try to make contact with the drone, but to no avail: the drone only responds to central command. The troops realise it is a drone, and hoist a white flag. The drone does not identify or recognise the white flag as a sign of anything of concern, and continues its mission. Children, seeing the tank, run out of hiding, realising it is their salvation, as the troops inside are friendly and the children have been walking behind them. The drone only sees the tank, and not the bigger picture. It has no concept of anything other than the mission itself. It is a Doomsday scenario. Human considerations on the front line give a soldier options as to whether or not to engage. It is that human conscious element which makes us….human, and not machines.
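The failure in the scenario above can be boiled down to a few lines. The following is a deliberately naive sketch (all function and field names are invented for illustration; no real targeting system is described) showing a rule that consults only the marking match, against a human review that can weigh the context the rule never sees:

```python
# A deliberately naive sketch of the rule described in the scenario:
# the machine classifies purely by vehicle markings. All names here are
# hypothetical, invented for illustration only.

def naive_engage_decision(target):
    # The only input the algorithm consults is the marking match.
    return target["markings"] == "enemy"

captured_tank = {
    "markings": "enemy",       # captured, but still painted as enemy
    "white_flag": True,        # ignored by the rule
    "civilians_nearby": True,  # ignored by the rule
}
assert naive_engage_decision(captured_tank)  # the drone still engages

# A human-in-the-loop check, by contrast, can weigh context the
# algorithm never sees before any trigger is pulled:
def human_review(target):
    if target.get("white_flag") or target.get("civilians_nearby"):
        return "hold fire"
    return "engage" if target["markings"] == "enemy" else "hold fire"

assert human_review(captured_tank) == "hold fire"
```

The sketch is not an argument that a better rule could not be written; it is an argument that any fixed rule only tests the inputs someone thought to give it, which is exactly the "unknown situations" problem.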
2. What if drones or other autonomous robots fall into enemy hands, or are infected with viruses so that the drones become rogue?
It must surely be conceivable. If worms can get into the computers running other countries’ nuclear reactors, this must surely be a consideration and a concern?
3. What if a computer gains so much information and data, that it achieves a form of artificial intelligence no longer needing human intervention? Not necessarily a conscience as humans have, but a way and a means to understand an objective to survive at the expense of humanity?
Far-fetched? Maybe. Conceivable? Absolutely!
Did humanity once think it far-fetched to fly to the Moon? Yet it happened.
What about Terminator, where machines achieve consciousness and try to destroy humanity?
The writer believes that these things should be properly considered now. The Law should be in place NEVER to allow autonomous drones. It is a step too far in the wrong direction, with unimaginable horrors awaiting us.
The entire project and concept of autonomous robots should be scrapped. We need to re-assess what is important to us. Asimov’s 4 Laws must necessarily be a good starting point.
…or maybe I have this all wrong, and I am working too hard…
Professor David Rosen is a Solicitor-Advocate, Partner and head of Litigation at Darlingtons Solicitors. He is an Associate Professor of Law at #Brunel University, and a member of the Society of Legal Scholars.