A question of ethics: killing an AI - is it murder?

As humankind gets ever closer to birthing a true AI, if an AI is killed, is it murder?


In my country it's illegal to kill another person (in most circumstances), a crime called murder.  England also no longer has the death penalty for any crime, so even if someone is of a very unsavoury character the most that can be done is incarceration.  All of that said, the world has seen many films over the years about man vs machine and man vs artificial intelligence (AI), which raises the question: "killing an AI - is it murder?"  Here are my thoughts.

(Note that I'm not using the legal definition of murder, which has to do with one person unlawfully intending to cause death or serious injury to another - essentially stopping the body from living.)

First off, we need to define what I mean by AI.  I'm not talking about your digital assistant (Google, Alexa et al.), even though companies call those AI technologies.  I also don't count machine learning algorithms that look at patterns to determine outcomes, or algorithms that can write prose that convincingly reads like it was written by a human.  By AI I'm referring to a machine that has its own personality, the ability to hold a convincing conversation (thus passing the Turing test) and the ability to make its own decisions.  To draw a comparison from sci-fi: by AI I mean Andromeda, not the Enterprise's computer [1].  The Enterprise's computer performs tasks on instruction, based on algorithms, whereas Andromeda (and Rommie) act on their own decisions.  Andromeda-type AIs can go rogue (a la the AI of the Balance of Judgment) whereas the Enterprise's computer could only break.  To my mind, an AI has a consciousness [2].

The Enterprise NCC-1701D from Star Trek and the Andromeda avatar from Andromeda [3].

What's a consciousness?

My usual baseline is whether or not an entity has a consciousness, which for me means the entity is capable of independent thought.  The Google Assistant doesn't have a consciousness as it just follows programming in the same way that my projects do, albeit the Google Assistant is much more advanced.  Where there's a consciousness there's a personality.  There's also free will, something Alexa doesn't have.  If there's output from the AI that's not pre-determined by a set of human-programmable rules then I'd say you're looking at a consciousness.  (As in, the AI greets me or performs an action without any form of prompting, so it's not caused by my geographical location or any form of trigger I've defined.)  A consciousness is also self-aware.
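To make that test a little more concrete, here's a toy sketch in Python (the trigger names and responses are entirely hypothetical, not taken from any real assistant) of what I mean by output that's pre-determined by human-programmable rules: every response traces back to a rule a person wrote, so by my definition there's no consciousness involved.

```python
# A toy "assistant": every output is pre-determined by rules a human wrote.
# The trigger names and responses are made up purely for illustration and
# don't reflect how any real assistant works.
from typing import Optional

RULES = {
    "arrived_home": "Welcome home.  Shall I turn the lights on?",
    "alarm_fired": "Good morning.  Here's today's weather.",
}

def assistant_output(trigger: str) -> Optional[str]:
    """Respond only if a human-defined rule matches the trigger."""
    return RULES.get(trigger)

# Every possible output maps back to a rule above.  The "assistant" never
# greets me unprompted - no trigger I've defined, no output - so by the
# definition in this post there's no consciousness here.
print(assistant_output("arrived_home"))  # rule-driven response
print(assistant_output("no_such_rule"))  # None: no rule, no output
```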

At the risk of entering the euthanasia debate, which we'll only pass through very briefly: if I was in a vegetative state and there was no chance of my mind ever functioning again, I wouldn't consider it murder to stop my body from living.  My body is a vessel for my consciousness, and if my consciousness is no longer present then I, Jonathan, am no longer alive.  My body is, but I'm not.  That's how I'd define my existence, and I wouldn't enforce that view on anyone else, but hopefully it clears up what I mean.

Erasing a computer

If my computing device gets infected with malware or becomes corrupted, I erase it and start again.  This is fairly common practice in IT and is how systems can be kept in a known good state.  My data (photos, projects etc.) aren't lost, as those get restored from backup, and the Operating System isn't something I particularly care about because there's no consciousness there to be concerned with.  The Operating System simply gets reinstalled from known good media.  There's no conceivable murder there.

The AI goes rogue

As is often the case in sci-fi, the AI, possessing its consciousness and being self-aware, could "go rogue" in much the same way as a human could.  A human can act against their peers and society, attacking others or committing acts of sabotage.  If an AI goes rogue it could do the same.  A human would be sanctioned: a loss of privileges (such as a curfew) or imprisonment, but that's more difficult to do to an AI that's a digital entity.  The AI could have multiple copies, spread across multiple bodies and locations - a distinct advantage over a human, who possesses a single body per consciousness (although arguably multiple consciousnesses can occupy the same body, as seen in multiple-personality conditions).  Imprisonment of the AI is thus not feasible, as there's no way to know you'd imprisoned all of it or to guarantee the AI wouldn't transfer itself to another container (i.e. escape).  If you've definitely constrained the whole AI you could cut off its network connections, restricting it to one area, but how would you know it was safe to do that?

Skynet from Terminator went rogue and performed a pre-emptive strike against humanity - what's the correct action?

Killing an AI, is it murder?

To kill an AI would be to erase its consciousness from everywhere, ending its free thought, just as terminating a human body destroys a human's consciousness.  If I kill a human that's murder in most circumstances.  Therefore, is erasing an AI murder?  Would your answer differ if that AI had gone rogue and was actively killing humans?  What about if it was killing other AIs?  If an AI kills another AI, is it murder?  That's a lot of questions that I just don't have an answer to.

When it comes to biological organisms we have a generally accepted hierarchy.  If I'm walking along and a dog attacks me, it's generally accepted that I can defend myself, killing the dog if necessary (what's deemed "reasonable force").  If I killed another human that attacked me there'd be more scrutiny to determine whether killing a human was a reasonable response (more scrutiny than if I'd killed a dog).  The first AI will be created by a human, so, as its creators, does that place the AI below us in that hierarchy?  Surely we have the right to kill it in that case, because we are "more important".

Importance and hierarchy tend to be graded on perceived intelligence, hence humans are considered more important than animals.  Similarly, in the event of a disaster requiring humanity to repopulate, certain people get preferred based on their skills (see any movie that involves evacuating the planet ahead of an impending disaster), so even within humanity we have a hierarchy.  Given an AI could have the same intelligence as us (or greater), where does that place it in the ranking?  On the one hand the human created the AI, so is presumably the more capable; on the other hand the AI has intelligence equivalent to its human creator's, can hold a conversation, is capable of independent thought and can make its own choices.

That's a lot to think about, and in all honesty I don't have an answer.  By the time we've created an equivalent consciousness I'd argue that yes, it would be murder, unless there was a mitigating circumstance (say, the AI had declared war).  The real challenge comes in applying sanctions to the AI - if it can't be incarcerated, is killing / erasing the AI an appropriate response?

I don't know.


Banner image: an ethics word cloud.

[1] I'm referring to the one in The Original Series or The Next Generation.  I've not seen Discovery or some of the newer releases.

[2] A consciousness differs from a soul for the purposes of this post.  I'm not considering souls.

[3] Images copyright their respective authors.