Offworld Report

AI In Law: an article by GF Willmetts

Science Fiction author Isaac Asimov was not happy with early robot stories and film depictions, quite understandably, because their robots ran out of control for no reason. He saw robots as machines with built-in restrictions and decided to balance things out by giving them innate protocols which governed their behaviour, prevented them from hurting their human masters and kept them obedient. In his ‘I, Robot’ and ‘The Rest Of The Robots’ anthologies, Asimov’s stories revolved around apparent breaches of these rules.

For those who don’t know them, see below:-

Asimov’s Three Laws Of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
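
Purely as an illustration of that precedence, here is a minimal sketch in Python of the laws as an ordered filter over candidate actions. To be clear, every predicate in it is a hypothetical stub I have invented; as the rest of this article argues, implementing them for real is the unsolved part.

```python
# Hypothetical stubs: deciding these predicates is the real, unsolved problem.
def would_harm_human(action):
    return action.get("harm", 0) > 0

def follows_order(action, orders):
    return action.get("obeys") in orders

def self_damage(action):
    return action.get("self_damage", 0)

SHUTDOWN = {"name": "shutdown", "harm": 0, "self_damage": 0}

def choose_action(candidates, orders):
    """Pick an action by Three Laws precedence:
    1. discard anything that harms a human (First Law),
    2. prefer actions that obey the current orders (Second Law),
    3. among those, minimise damage to the robot itself (Third Law)."""
    safe = [a for a in candidates if not would_harm_human(a)]
    if not safe:
        return SHUTDOWN          # no lawful option: freeze rather than risk harm
    obedient = [a for a in safe if follows_order(a, orders)]
    pool = obedient or safe      # obedience outranks self-preservation
    return min(pool, key=self_damage)
```

Notice that even this toy version dodges the real question: what counts as ‘harm’ is simply asserted by a stub, which is exactly the difficulty discussed below.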

As robot intelligence developed in our world, even Asimov conceded that he didn’t know how these laws could be interpreted in real-life silicon brains, let alone the positronic sponges he used for the robot brains of US Robots and Mechanical Men.

In law, barristers would probably want these laws better clarified if they could exist. Take the First Law. The robot must know that it is killing a human being. If it was obeying a human and simply pressing a button, it would have no idea what its action could cause.

The same with the Second Law. A robot must obey human orders, but if it receives conflicting orders from different humans, how does it choose beyond not harming a human? Should we also address what constitutes being a human and how that is determined? If there are tight criteria, would they be based on level of intelligence rather than merely human shape? As I said above, even Asimov himself questioned his own laws and, when he merged his robot reality with the ‘Foundation’ future, he had removed all robots but one, but that’s a spoiler.

We haven’t even tackled emotional injury or how that can be interpreted. It would certainly prevent a robot firing a human from a job.

At least some thought had been given to governing the robots of his reality: protect humans, obey orders and, only then, protect themselves. In this modern age, with artificial intelligence (AI) in its nascent stages, there is no command structure to safeguard humans. This isn’t to make it a nanny state, but it does raise the problem of how to ensure there is some safety, or we risk the likes of Skynet evolving and deciding humans are just too much bother.

This goes back to the problem above. How do you define ‘harm’, and can it be accepted as law in the various countries across the world? AIs, at present, aren’t installed in robot bodies, so they are limited to computer software except when they control other tools such as medical analysis equipment or drones. To give an armed drone autonomy with no override control would be foolish because it may not be able to tell the difference between friend and foe, let alone bystanders. Equally, the possibility of the enemy getting access to this code and turning the drone on its makers shows any safeguard can’t be inflexible. Both sides will be working out how to defeat their opponents’ drones.

The thing is, where do we start? How can we teach an AI ethics when humans have a reputation for not obeying them either? The current levels of Internet AI have made mistakes with racism and misogyny, let alone directed children to pornography. How can you apply ethics to that when it is so easy for the computer-savvy young to turn off parental guidance? How does an AI react to a human making a racist comment when asking a question? Tell him or her not to say that, or just answer the question actually asked? Does the AI then evaluate the kind of person it is talking to and decide whether the exchange should be reported or treated as a private conversation? As a generalisation, the responses could be programmed in, with some latitude on privacy until the person involved releases it publicly. Note, though, that none of this is based on ethics or personal or public safety. The AI will obey an instruction without having to know what it means.

Let’s look at something that is a current problem. Should the response be based on looking at taboo words or their context, and how does it understand the latter? We’ve seen or heard of the results on social media where, if a surname contains an apparently taboo word, it is already red-flagged, as with Hancock, Cockfosters, etc. Only the Illuminati knows how an AI would react to obscure swear words, parts of the anatomy or racial slurs, and that’s only in language that isn’t supposed to be offensive. To the current AI, it’s just words, without recognising context and meaning.
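
This is often called the Scunthorpe problem. A minimal sketch of the two approaches, with a deliberately crude one-word block list, shows why innocent surnames get caught:

```python
import re

TABOO = {"cock"}   # deliberately crude single-entry block list

def naive_filter(text):
    """Substring matching: the mistake that red-flags innocent surnames."""
    return any(word in text.lower() for word in TABOO)

def word_filter(text):
    """Whole-word matching: spares the names, but still knows nothing
    about context, euphemism or intent."""
    return any(token in TABOO for token in re.findall(r"[a-z]+", text.lower()))

for name in ("Hancock", "Cockfosters", "cockerel"):
    print(name, naive_filter(name), word_filter(name))
# All three are red-flagged by the naive version and passed by the
# word version. Neither version understands what any of the words mean.
```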

There are so many different ways we can go from here. If we were to give an AI legal skills so it stays within the laws of any country, then you would have to give it all of them. Now, I can’t speak for other countries, but I do know that, as the UK has changed over the decades, some laws have become obsolete and, although they haven’t been repealed, they are ignored. I doubt an AI would know this, but it would face some contradictions and question what is really ‘legal’. It might prompt many countries to look at all their laws and tidy them up, but it takes more time to repeal laws than simply to ignore them. There’s a distinct possibility that some form of AI will scan the laws and look for any that have not been used in recent decades. Humans are as much a problem as AIs here.
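
The scan itself is almost trivial, as this sketch over entirely made-up statute records shows; the hard part in reality is that tidy ‘last applied’ data for laws doesn’t exist in this form:

```python
from datetime import date

# Entirely hypothetical statute records: (title, enacted, last_applied).
statutes = [
    ("Hypothetical Highways Act", 1847, 1902),
    ("Hypothetical Data Act", 1998, 2021),
    ("Hypothetical Licensing Act", 1911, None),   # never applied at all
]

cutoff = date.today().year - 50   # a rough reading of 'recent decades'

for title, enacted, last_applied in statutes:
    if last_applied is None or last_applied < cutoff:
        print(f"Dormant: {title} (enacted {enacted}, last applied {last_applied})")
```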

This still does not show an AI can tell good people from bad, let alone from anyone who is just curious and wants information. I doubt if even a lie detector would be able to tell the difference. Any information can be turned to malicious intent, even a street map, if we think of the Novichok deaths. Asking about specific ingredients used in bomb-making should trigger an alert, even if one of them is also used as fertiliser. Even so, this also has the hallmarks of a nanny state controlled by AIs rather than by free will or humans, subject to how much control we leave to these AI algorithms. One thing should always be remembered: an AI is only your friend because it is programmed that way, not because it really is. We haven’t even got that far yet.

Oddly, the last two laws are more like conditions for the First Law to function. They limit what the robot can do which, as I’ve pointed out, hinges on it ‘knowingly’ hurting a human, and the same consideration applies to the third. How would a robot know it was injuring itself or being damaged, and how far could it let things go before it knows it is destroying itself?

How to program such protocols is tough because, even if the robot was AI-competent, we would still have to explain to it what it can and can’t do. If the AI robot assessed that any action it took could result in injury to a human, it might automatically decide to shut down. It certainly should have that as an option if there is too much contradiction. It would take a clever human to phrase an order to hurt someone so that it did not sound like one.
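
As a sketch of that shutdown option, assuming a hypothetical harm estimator and contradiction detector, both invented here and both the genuinely hard part:

```python
HARM_LIMIT = 0.01      # hypothetical maximum acceptable chance of injury
CONFLICT_LIMIT = 3     # give up after this many contradictory orders

def contradicts(earlier, order):
    # Invented stub: real contradiction detection is an open problem.
    return earlier.get("forbids") == order.get("action")

def guarded_execute(order, history, estimate_harm):
    """Refuse, or shut down entirely, rather than act under doubt."""
    if estimate_harm(order) > HARM_LIMIT:
        return "SHUTDOWN: action could injure a human"
    if sum(contradicts(past, order) for past in history) >= CONFLICT_LIMIT:
        return "SHUTDOWN: too many contradictory orders"
    history.append(order)
    return "executing: " + order["action"]
```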

Having an AI robot analyse the question before committing itself wouldn’t be a problem, as it can certainly work faster than we could. Equally, we would expect it to ask questions when it needs clarification. In that case, the onus would be on the human questioner to give an accurate question. If an AI or AI robot were to be given any autonomy, it would have to be in a human-free environment with precise instructions as to its activities.

That being the case, maybe we ought to work out what function the AI is supposed to perform, how its protection of humans would work, and look for the pitfalls. Take medicine. How can an AI do the work of a doctor? Think of placating a patient who is afraid of needles but needs an inoculation, saying, ‘This won’t hurt,’ when there is likely to be an ouch reaction. Equally, telling a patient that a medicine won’t taste nasty when it surely will. Quite how an AI can be trained in the calm patience and occasional white lie needed to treat a patient when there are some elements of pain is a key problem. Some tasks will need a lot of programming, or at least a lot of options to work through, or the AI would be sedating every patient it worked on.

So how does an AI differentiate levels of hurt in a human? Certainly the AIs that developed racist remarks from human conversations couldn’t tell the difference, and you would have thought such a distinction would at least have been built into their programming.

With mostly autonomous cars driving people around, there has been a lot of speculation on how they are supposed to react to situations where either the passengers or pedestrians are killed, as if there is only a two-choice system. I mean, none of the examples I’ve read has ever considered that the car might just stop, or at least go up an embankment if it couldn’t brake. In terms of car safety, I doubt such cars would exceed the speed limit or misjudge the right time to overtake. If anything, if it could speak, an AI-controlled car would deem human-driven cars the road hazards.
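
A toy version of the choice, with invented risk numbers, shows how the dilemma dissolves once braking and the verge are on the candidate list:

```python
# Invented risk estimates for each manoeuvre: the point is only that the
# candidate list is longer than the two options the thought experiments allow.
options = {
    "swerve_into_pedestrians": {"pedestrians": 0.90, "passengers": 0.10},
    "swerve_into_wall":        {"pedestrians": 0.00, "passengers": 0.70},
    "emergency_brake":         {"pedestrians": 0.10, "passengers": 0.05},
    "brake_and_mount_verge":   {"pedestrians": 0.05, "passengers": 0.15},
}

def expected_harm(risks):
    return risks["pedestrians"] + risks["passengers"]

best = min(options, key=lambda name: expected_harm(options[name]))
print(best)   # 'emergency_brake' on these made-up numbers
```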

With the use of AI, at least, areas of human safety are being treated more seriously than in Science Fiction. To be fair, though, most stories are set in a future where the bugs in the programming have already been sorted out.

Looking at the list of what AI is employed in already, I still think it’s an over-used word. In things like manufacturing and medicine, it’s more a case of giving the software parameters and letting it go through the procedures, as it can be faster and more accurate than humans without getting bored or tired. It reduces the number of results it needs to show to humans to the ones that should be of concern, and new tests can be introduced as needed. But is it genuine AI? Can it be independently inventive or come up with different results or solutions to something it finds? I haven’t heard of that happening yet, so it’s probably more clever programming than actual AI using the improved visual hardware.
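
That is the pattern in a nutshell: parameters, not intelligence. A minimal sketch, with hypothetical units, thresholds and patient readings:

```python
# Flag only readings outside a human-set range; everything else is
# filtered from view. The range and readings are invented for illustration.
NORMAL_RANGE = (4.0, 11.0)

readings = {"patient_17": 5.2, "patient_18": 13.9, "patient_19": 3.1}

flagged = {patient: value for patient, value in readings.items()
           if not NORMAL_RANGE[0] <= value <= NORMAL_RANGE[1]}
print(flagged)   # {'patient_18': 13.9, 'patient_19': 3.1}
```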

I’m definitely less sure about using it in creative activities. Looking at the examples in Google, it can’t give accurate information from photo resources, even when enough information is given for an accurate assessment. Would you trust it on text unless you knew the answers already? Adults might realise this, but you can bet school kids won’t.

We still need to define what true AI is because the term is bandied about for all sorts of activities. As pointed out above, with certain medical photograph examination, it comes down to elaborate programming rather than independent thinking. A lot of activities can be handled simply by covering all the options. When I was working back in the 1980s, solely in BASIC, I covered all the options, and some people were convinced the software had a mind of its own because I had anticipated all the inputs and what to do in each circumstance. It’s not that difficult to make software look more elaborate than it is.
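
The trick translates directly into any modern language. A minimal sketch of the same idea: a canned response for every anticipated input, plus a graceful fallback.

```python
# Anticipate every input and map it to a canned response: no intelligence,
# just coverage. The fallback is what stops it ever looking lost.
RESPONSES = {
    "hello":  "Good day. How can I help?",
    "help":   "Type 'report', 'save' or 'quit'.",
    "report": "Printing the monthly report...",
    "save":   "Data saved.",
    "quit":   "Goodbye.",
}

def respond(user_input):
    return RESPONSES.get(user_input.strip().lower(),
                         "I didn't understand that. Type 'help'.")

print(respond("HELLO"))   # Good day. How can I help?
print(respond("fly"))     # I didn't understand that. Type 'help'.
```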

For an AI to create its own code would be more a question of filling in gaps in existing code: the equivalent of an options list with the data stored in a database, easy to add to and remove from as needed, but not independent thought. The software would only be capable of matching criteria without understanding why. It wouldn’t be able to connect different things in its database and create a viable, different solution. If enough options are available for what is presented to it, then it can find a match. The so-called AI used in search engines works on that principle, but it is really only doing a word search, which is why incorrect JPGs keep popping up when you want to find a match. Applying the same to information, I certainly wouldn’t trust it for total accuracy when it gives multiple different answers.
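
A sketch of that word-matching principle, over made-up image captions, shows exactly why the wrong JPGs surface:

```python
# Score items purely by shared words: matching on words, not meaning.
def overlap(query, caption):
    return len(set(query.lower().split()) & set(caption.lower().split()))

captions = {
    "img_001.jpg": "jaguar car on mountain road",
    "img_002.jpg": "jaguar resting in rainforest",
    "img_003.jpg": "vintage car rally",
}

query = "jaguar animal"
ranked = sorted(captions, key=lambda k: overlap(query, captions[k]), reverse=True)
print(ranked[0])   # 'img_001.jpg': the car, not the animal, wins the tie
                   # because 'animal' matches nothing and 'jaguar' matches both
```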

Even so, it’s not as though it’s an independent Artificial Intelligence. I doubt if we’re even close to making something that could parallel the HAL 9000.

Whatever the options, don’t just depend on an AI as the fount of all knowledge. A simple search for photographs of a particular person that returns people who are not them shows how hit and miss it is. It is better to have a good general knowledge and use it for confirmation. Although an AI could be configured not to deliberately kill someone, it would not be able to stop you using its information to do it yourself, which is much more worrying.

You would think Science Fiction writers and fans would all be in favour of AI, but I tend to think we’re more aware of its dangers. Even with Asimov’s laws stopping robots going on the rampage for no logical reason, it’s the ones with logical reasoning that are far more dangerous, as witnessed by the likes of Colossus and Skynet. To give any AI total autonomy without some form of kill-switch, or preferably several, would be foolish should things get out of hand.
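
At its simplest, a kill-switch is just a flag the software never controls itself. A minimal, purely illustrative sketch:

```python
import threading

kill_switch = threading.Event()   # held by humans, never set by the agent

def agent_loop(step):
    # The agent only acts while the switch stays unset.
    while not kill_switch.is_set():
        step()
    print("Agent halted by kill-switch.")

# Usage: run the agent in a thread and keep the switch in human hands.
agent = threading.Thread(target=agent_loop, args=(lambda: None,))
agent.start()
kill_switch.set()   # a human, or an independent monitor, pulls it
agent.join()
```

Several independent switches, as suggested above, would just mean several such flags checked in the same loop.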

Currently, the promotion of AI outstrips the results. Even the ones currently in use have their own limitations, simply because their parameters aren’t set correctly. I’ve tended to steer away from the likes of Alexa simply because they would limit what I am looking for rather than allow me to widen my knowledge. The fact that companies building AI-type software want to get it on the market as quickly as possible, no doubt interested in seeing how it copes with humans, does make me think they are rushing.

© GF Willmetts 2025

All rights reserved

Ask before borrowing

UncleGeoff

Geoff Willmetts has been editor at SFCrowsnest for some 21-plus years now, showing versatility and knowledge not only in Science Fiction, but also in the sciences and arts, all of which has been displayed here through editorials, reviews, articles and stories. With the latter, he has been running a short story series under the title of ‘Psi-Kicks’. If you want to contribute to SFCrowsnest, read the guidelines and show him what you can do. If it isn’t usable, he spends as much time telling you what the problems are as he would with material he accepts. This is largely how he got called an Uncle, as in Dutch Uncle. He’s not actually Dutch but hails from the West Country in the UK.
