Wednesday, 8 September 2010

How Much Ethics Do We Really Need?

Even if you're not a fan of science fiction, you have probably heard of Isaac Asimov's Three Laws of Robotics. All Asimov's robots have them instilled into their positronic brains in a way that makes them immutable and unbreakable (well, most of the time anyway). They serve as safeguards to humanity - mostly because humanity tends to overreact when it feels threatened intellectually.

This is not a post about robots, as interesting a topic as they may be (I may fill that particular gap at a later date). This post is about humanity (and more), and how it may be possible to replace all the fuzzy and complex "rules" of ethics that nobody seems able to agree upon - and they've been at it for millennia.

So, hear me out. I might just make sense...

If you haven't encountered the Three Laws yet (e.g. you've lived in a cave for the last 50 years or so), I'll quote them for you here:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Sometimes, they are augmented by the so-called Zeroth Law:
  • A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The other laws are changed accordingly (i.e. the First Law gets "except where it would conflict with the Zeroth Law", etc.). Now, just for argument's sake, let's change "robot", above, into "human". I've added the Zeroth Law, too, and, to help you, also bolded the changes:
  1. A human may not harm humanity, or, by inaction, allow humanity to come to harm.
  2. A human may not injure a human being or, through inaction, allow a human being to come to harm, except where such (in)action would conflict with the First Law.
  3. A human must obey any orders given to him by human beings, except where such orders would conflict with the Second Law.
  4. A human must protect his own existence as long as such protection does not conflict with the First, Second, or Third Laws.
Looks good? Makes sense? I think so.

Looking at the above recently, and having thought about the original Laws for many a year (reading any of Asimov's robot stories or books, you end up doing little else), I have come to think that this may be the great distillation of ethics for all of humanity (and possibly all sentient beings).

There is one problem with the list above, and it's the Third Law. A free, sentient being is not really meant to blindly obey others just because they feel like ordering him (or it) around. But, on the other hand, we do comply with reasonable requests, grant favours, and generally cooperate when it's either in our own interest, or in the interest of a greater community we associate with. Still, the Third Law, as stated above, needs work. Probably serious work.

An obvious way out of this problem is removing the Third Law completely:
  1. A human may not harm humanity, or, by inaction, allow humanity to come to harm.
  2. A human may not injure a human being or, through inaction, allow a human being to come to harm, except where such (in)action would conflict with the First Law.
  3. A human must protect his own existence as long as such protection does not conflict with the First or Second Laws.
From where I'm sitting, the above looks about right for a universal cheat sheet of ethics.
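
For the programmers among you, here is one way to picture the precedence above - a minimal Python sketch, purely my own illustration (the Action fields, the LAWS list and first_violation are invented for this toy example, not anything from Asimov). The point it tries to make is that all the "except where it would conflict with..." clauses boil down to checking the laws in order of priority.

  # Purely illustrative: the three "laws" as a priority-ordered rule list.
  # The "except where it would conflict with..." clauses are encoded by the
  # ordering itself - the check stops at the highest-priority law that objects,
  # so a lower law can never override a higher one.
  from dataclasses import dataclass
  from typing import Callable, List, Optional, Tuple

  @dataclass
  class Action:
      harms_humanity: bool = False    # would this action (or inaction) harm humanity?
      harms_individual: bool = False  # would it injure a human being?
      endangers_self: bool = False    # would it put the actor's own existence at risk?

  # Priority order: humanity first, then the individual, then self-preservation.
  LAWS: List[Tuple[str, Callable[[Action], bool]]] = [
      ("First Law", lambda a: a.harms_humanity),
      ("Second Law", lambda a: a.harms_individual),
      ("Third Law", lambda a: a.endangers_self),
  ]

  def first_violation(action: Action) -> Optional[str]:
      """Return the highest-priority law the action would break, or None."""
      for name, violates in LAWS:
          if violates(action):
              return name
      return None

  # Risking yourself to avoid injuring someone else: the Second Law is what
  # gets reported, and the Third never gets a say.
  print(first_violation(Action(harms_individual=True, endangers_self=True)))  # Second Law
  print(first_violation(Action(endangers_self=True)))                         # Third Law
  print(first_violation(Action()))                                            # None

Silly as it may look written out like that, it does make the point: the whole edifice is three predicates and an ordering, nothing more.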

I still think we could put the Third Law back in, to cater for all sorts of cooperative activities. I just fear that it'd need too much legalese, and as such become unwieldy, and thus useless as a quick, hard-and-fast rule. Plus, fewer rules are always better than more - especially when they're as clear as the ones above.

One more observation from where I'm sitting, to expand on the parenthetical remark from a few paragraphs above. I see no reason not to replace "human" and "humanity" with even broader terms: "sentient being" and whatever term exists for a collection of such beings (does one exist at all?). It doesn't even matter whether such beings, apart from humans, exist or not - but if they do, they surely have to come under the same ethical umbrella. Even if they come with a positronic brain that we have designed ourselves. Just as long as we take care to instil the laws above into them before they start getting ideas of their own...