A.I. In Law: an article by GF Willmetts.
Science fiction author Isaac Asimov was not happy with early robot stories and film depictions, quite understandably, because their robots ran out of control for no reason. He saw robots as machines with built-in restrictions and decided to balance things out by setting up innate protocols that governed them, preventing them from hurting their human masters and making them obedient. In his anthologies ‘I, Robot’ and ‘The Rest Of The Robots’, Asimov’s stories revolved around the apparent breaking of these rules.
For those who don’t know them, see below:-
Asimov’s Three Laws Of Robotics
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
As robot intelligence developed in our own world, even Asimov conceded that he didn’t know how these laws could be implemented in real silicon brains, let alone in the positronic sponges he gave the robots of US Robots and Mechanical Men.
In law, barristers would probably want these laws clarified considerably if they could ever exist. Take the first law. For it to apply, the robot must know that it is killing a human being. If it were obeying a human and simply pressing a button, it would have no idea what its action could cause.
The same goes for the second law. A robot must obey human orders, but if it receives conflicting orders from different humans, how does it choose beyond not harming a human? Should we also address what constitutes a human being and how that is determined? If there were tight criteria, would they be based on level of intelligence rather than merely human shape? As I said above, even Asimov questioned his own laws, and when he merged his robot reality with the ‘Foundation’ future, he had removed all robots but one, but that’s a spoiler.
We haven’t even tackled emotional injury or how that can be interpreted. It would certainly prevent a robot from firing a human from a job.
At least some thought had been given to governing the robots of his reality: protect humans, obey orders and, only then, protect themselves. In this modern age, with artificial intelligence (AI) in its nascent stages, there is no such command structure to safeguard humans. This isn’t about creating a nanny state, but it does raise the problem of how to ensure some safety without risking the likes of Skynet evolving and deciding humans are just too much bother.
This goes back to the problem above. How do you define ‘harm’, and can any definition be accepted as law across the various countries of the world? AIs, at present, aren’t installed in robot bodies, so they are limited to computer software, except when they control other tools such as medical analysis equipment or drones. To give an armed drone autonomy with no override control would be foolish, because it may not be able to tell the difference between friend and foe, let alone bystanders. Equally, the possibility of an enemy getting access to that code and turning the drone on its makers shows the controls can’t be inflexible. Both sides will be working out how to defeat their opponents’ drones.

The thing is, where do we start? How can we teach AI ethics when humans have a reputation for not obeying them either? The current levels of Internet AI have made mistakes with racism and misogyny, and have even directed children to pornography. How can you apply ethics to that when it is so easy for the computer-savvy young to turn off parental guidance? How does an AI react to a human making a racist comment while asking a question? Should it tell him or her not to say that, or just answer the question actually asked?
Does the AI then evaluate the kind of person it is talking to and decide whether the remark should be reported or treated as a private conversation? As a generalisation, the responses could be programmed in, with some latitude on privacy until the person involved makes the exchange public. Note, though, that none of this process is based on ethics or on personal or public safety. The AI will obey an instruction without having to know what it means.
Let’s look at a current problem. Should the response be based on spotting taboo words or on their context, and how does the AI understand the latter? We’ve seen or heard of the results on social media, where a name that merely contains an apparently taboo string gets flagged, as with the surname Hancock or the place name Cockfosters. Only the Illuminati knows how an AI would react to obscure swear words, parts of the anatomy or racial slurs, and that’s only in language that isn’t supposed to be offensive. To the current AI, it’s just words, without recognising context or meaning.
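To make the word-versus-context point concrete, here is a minimal sketch in Python, using an invented one-entry blocklist, of why names like these get caught: a filter that only hunts for substrings flags the place name, a slightly smarter whole-word check lets it through, and neither has the faintest idea what the sentence actually means.

```python
import re

# Invented, tiny blocklist purely for illustration; real moderation lists are
# far larger and the real systems far more elaborate.
BLOCKLIST = {"cock"}

def naive_filter(text: str) -> bool:
    """Flags the text if a blocked string appears anywhere, even inside another word."""
    lower = text.lower()
    return any(word in lower for word in BLOCKLIST)

def whole_word_filter(text: str) -> bool:
    """Flags the text only when a blocked string appears as a word on its own."""
    lower = text.lower()
    return any(re.search(rf"\b{re.escape(word)}\b", lower) for word in BLOCKLIST)

print(naive_filter("Change at Cockfosters for the Piccadilly line"))       # True: false positive
print(whole_word_filter("Change at Cockfosters for the Piccadilly line"))  # False: the place name passes
```

Neither version understands anything; one just makes fewer of the obvious mistakes.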
There are so many different ways we can go from here. If we were to give an AI legal skills so that it stays within the laws of any country, you would have to give it them all. Now, I can’t speak for other countries, but I do know UK law has changed over the decades; some laws have become obsolete and, although they haven’t been repealed, they are simply ignored. I doubt an AI would know this, so it would face contradictions and question what is really ‘legal’. It might prompt many countries to look at all their laws and tidy them up, but it takes far longer to repeal laws than to just ignore them. There’s a distinct possibility that some form of AI will scan the statute books and look for laws that have not been used in recent decades. Humans are as much a problem as AIs here.
This still does not show that an AI can tell good people from bad, let alone from anyone who is just curious and wants information. I doubt even a lie detector would be able to tell the difference. Any information can be put to malicious use, even a street map, if we think of the Novichok deaths. Asking about the specific ingredients used in bomb-making should trigger an alert, even if one of them is also sold as fertiliser. Even so, this has the hallmarks of a nanny state controlled by AIs rather than by free will or humans, depending on how much control we leave to these algorithms. One thing should always be remembered: an AI is only your friend because it is programmed that way, not because it really is. We haven’t even got that far yet.
Oddly, the second and third laws are more like conditions for the first law to function. They limit what the robot can do, which, as I’ve pointed out, hinges on it ‘knowingly’ hurting a human, and the same consideration applies to the third law. How would a robot know it was injuring itself or being damaged, and how far could it let things go before it realises it is destroying itself?
Programming such protocols is tough because, even if the robot were AI-competent, we would still have to explain to it what it can and can’t do. If an AI robot assessed that any action it took could result in injury to a human, it might automatically decide to shut down. It certainly should have that as an option if there is too much contradiction. It would take a clever human to phrase an order to hurt someone in a way that did not sound like one.
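As a rough sketch of that idea, and emphatically not anyone’s actual safety architecture, the refuse-or-shut-down behaviour might look like this in Python, with every name and threshold invented for illustration. The hard part, which the sketch simply assumes away, is where the injury-risk figure comes from in the first place.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    estimated_injury_risk: float   # 0.0 to 1.0, however such a figure might be derived
    contradicts_prior_order: bool = False

def decide(action: ProposedAction, risk_threshold: float = 0.01) -> str:
    """Refuse risky actions and shut down on contradictory orders rather than guess."""
    if action.contradicts_prior_order:
        return "SHUT DOWN: conflicting orders, wait for a human to resolve them"
    if action.estimated_injury_risk > risk_threshold:
        return "REFUSE: possible harm to a human"
    return "PROCEED"

print(decide(ProposedAction("press the unlabelled button", estimated_injury_risk=0.4)))
print(decide(ProposedAction("fetch the post", estimated_injury_risk=0.0)))
```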
Having an AI robot analyse the question before committing itself wouldn’t be a problem, as it can certainly work faster than we could. Equally, we would expect it to ask questions when it needs clarification. In that case, the onus falls on the human questioner to ask an accurate question. If an AI or AI robot were to be given any autonomy, it would have to be in a human-free environment with precise instructions as to its activities.
That being the case, maybe we ought to work outwards from the function the AI is supposed to perform, ask how its protection of humans would work and look for the pitfalls. Take medicine. How can an AI do the work of a doctor? Think of placating a patient who is afraid of needles but needs an inoculation, saying, ‘This won’t hurt’ when there is likely to be an ‘ouch’ reaction, or telling a patient that a medicine won’t taste nasty when it surely will.
Quite how an AI can be trained in the calm patience and the occasional white lie needed to treat a patient when some element of pain is involved is a key problem. Some tasks will need a lot of programming, or at least a lot of options to choose from, for it to work at all; otherwise the AI would be sedating every patient it handled.
So how does an AI differentiate levels of hurt in a human? Certainly the AIs that picked up racist remarks from human conversations couldn’t tell the difference, and you would have thought that distinction could at least have been built into their programming.
With mostly autonomous cars driving people around, there has been a lot of speculation on how they are supposed to react to situations where either the passengers or pedestrians will be killed, as if there were only a two-choice system. None of the examples I’ve read have ever considered that the car might just stop, or at least go up an embankment if it couldn’t brake. In terms of safety, I doubt such cars would exceed the known speed limit, or even know the right time to overtake. If anything, could it speak, an AI-controlled car would deem the human-driven cars to be the road hazards.
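As a toy illustration of that point, and nothing like a real driving policy, the decision space only has to be written down to stop looking like a two-way choice; the function and its inputs below are entirely invented.

```python
def choose_manoeuvre(brakes_working: bool, embankment_clear: bool) -> str:
    """Pick an emergency manoeuvre; 'hit A or hit B' is never the whole option list."""
    if brakes_working:
        return "emergency stop"
    if embankment_clear:
        return "steer up the embankment and scrub off speed"
    return "sound horn, shed as much speed as possible, minimise any impact"

print(choose_manoeuvre(brakes_working=False, embankment_clear=True))
```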
At least the use of AI is prompting consideration of areas of human safety, which are being treated more seriously than they were in science fiction. Although, to be fair, most stories are set in a future where the bugs in the programming have already been sorted out.
Looking at the list of things AI is already employed in, I still think it’s an overused word. In fields like manufacturing and medicine, it’s more a case of giving the software parameters and letting it go through the procedures, as it can be faster and more accurate than humans without getting bored or tired. It reduces the number of results it needs to show humans to the ones that should be of concern, and new tests can be introduced as needed. But is it genuine AI? Can it be independently inventive or come up with different results or solutions to something it finds? I haven’t heard of that happening yet, so it’s probably just clever programming rather than actual AI, albeit using improved visual hardware.
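That kind of parameter-driven screening is easy to picture. Here is a minimal sketch, with invented test names and made-up reference ranges, of software that only surfaces the results a human should look at; adding a new test is just another table entry, not any new intelligence.

```python
# Invented reference ranges, purely for illustration; real laboratory limits vary.
REFERENCE_RANGES = {
    "fasting_glucose": (3.9, 5.6),   # mmol/L
    "haemoglobin": (130, 175),       # g/L
}

def results_of_concern(results: dict) -> dict:
    """Return only the readings that fall outside their reference range."""
    flagged = {}
    for test, value in results.items():
        low, high = REFERENCE_RANGES[test]
        if not low <= value <= high:
            flagged[test] = value
    return flagged

print(results_of_concern({"fasting_glucose": 7.2, "haemoglobin": 140}))
# Only the out-of-range glucose reading is shown to a human reviewer.
```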
I’m definitely less sure about using it in creative activities. Looking at the examples Google serves up, it can’t give accurate information from photo resources, even when enough information is available for an accurate assessment. Would you trust it on text unless you knew the answers already? Adults might realise this, but you can bet school kids won’t.
We still need to define what true AI is, because the term is bandied about for all sorts of activities. As pointed out above, with certain medical photograph examinations it comes down to elaborate programming rather than independent thinking. A lot of activities can be handled simply by covering all the options. When I was programming back in the 1980s, solely in BASIC, I covered all the options, and some people were convinced the software had a mind of its own because I had anticipated every input and what to do in each circumstance. It’s not that difficult to make any software look more elaborate than it is.
For an AI to create its own code, it would be more a question of filling in gaps in existing code: the equivalent of an options line with the data stored in a database. That’s easy to add to and remove from as needed, but it’s not independent thought. The software would only be capable of matching criteria without understanding why. It wouldn’t be able to connect different things in its database and create a viably different solution. If enough options exist for what is presented to it, it can find a match. The so-called AI used in search engines works on that principle, but it is really only doing a word search, which is why incorrect JPGs keep popping up when you want to find a match. Apply the same to information and I certainly wouldn’t trust it for total accuracy when it gives multiple different answers.
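A minimal sketch of that ‘options line with the data stored in a database’ idea, with an invented table and queries, shows how far plain matching gets you and where it stops: a right-looking answer comes back, but the part of the question that actually mattered was never understood.

```python
# Invented canned-answer table, keyed on words and matched with no grasp of meaning.
RESPONSES = {
    "opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "refund": "Refunds are processed within five working days.",
    "delivery": "Standard delivery takes three to five working days.",
}

def answer(query: str) -> str:
    """Return the first canned response whose keyword appears in the query: a word search, nothing more."""
    lower = query.lower()
    for keyword, response in RESPONSES.items():
        if keyword in lower:
            return response
    return "Sorry, I don't have an answer for that."

print(answer("What are your opening hours on a bank holiday?"))
# Returns the standard hours, oblivious to the bank holiday it never understood.
```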
Even so, it’s not as though it’s an independent artificial intelligence. I doubt if we’re even close to making something that could parallel the HAL 9000.
Whatever the options, don’t depend on an AI as the fount of all knowledge. Ask it for photographs of a particular person and it gives you people who are not them, which shows how hit and miss it is. It is better to have good general knowledge and use the AI for confirmation. Although an AI could be configured not to deliberately kill someone, it would not be able to stop you from using its information to do it yourself, which is much more worrying.
You would think science fiction writers and fans would all be in favour of AI, but I tend to think we’re more aware of its dangers. Even with Asimov’s intervention stopping robots from going on the rampage for no logical reason, it’s the ones with logical reasoning that are far more dangerous, as witnessed by the likes of Colossus and Skynet. To give any AI total autonomy without some form of kill switch, or preferably several, would be foolish should things get out of hand.
Currently, the promotion of AI outstrips the results. Even the systems in use have their limitations, simply because their parameters aren’t set correctly. I’ve tended to steer away from the likes of Alexa simply because they would limit what I am looking for rather than allow me to widen my knowledge. The fact that companies building AI-type software want to get it on the market as quickly as possible, and are no doubt interested in seeing how it copes with humans, does make me think they are rushing.
© GF Willmetts 2025
All rights reserved.
Ask before borrowing.
