In the adolescence of our species (i.e., right now), we have created what is essentially another lifeform to aid us and ease our human burdens: Artificial Intelligence. Not everyone agrees this was the best move for humanity, and yet the progress marches on inevitably! This article explores whether A.I. will be capable of morally fulfilling the duties it is being designed to perform. As A.I. gains the ability to influence almost every step of our daily lives, it is important that our actions are carried out by a safe and, more importantly, a morally justified system, one that is worthy of our trust.
Firstly, it is important to consider why the credibility of machines is coming into question in the first place. After all, machines can flawlessly execute any instruction without any margin of error, right? The problem is that the instructions given to a machine depend entirely on the person designing them! Consider a scenario: should a self-driving car crash into a young pedestrian in an effort to save its elderly passenger? We get different answers depending on whom we ask. MIT’s Moral Machine illustrates this beautifully. I got different answers from many of my friends, but does that mean I’m wrong? Well, not necessarily! There is no right answer, and there never will be one; this is the heart of the problem with ethics: philosophies are NEVER universal.
Why Does A.I. Need to be Moral?
One very useful observation from the above is the reason why we want to moralize computers: to make “better” decisions, which is only a small part of the multitude of reasons why humanity prioritizes morality! This means that morality in A.I.s and morality in humans serve two distinct and mostly unrelated purposes. Simply put, A.I.s have no need to do things like secure societies or unite people through moral decisions. They only have to satisfy and replicate the moral beliefs of others. This is one angle that I didn’t find any material on, and also one that none of the people I talked to (mostly schoolmates) considered. Could a different sort of moral instruction set be created with this in mind? Let’s leave that to the experts in moral philosophy and computer science.
Can A.I. be Moral?
Yes, sort of. Projects like the aforementioned MIT Moral Machine and the work of people like Vincent Conitzer aim to create a moral instruction set for A.I.s. But is this going to satisfy everyone? Absolutely not. Was it created with the aim of satisfying everyone? Also absolutely not. You see, this “morality” is created by having real people make decisions first, and then having the A.I. recognize the patterns in those decisions. It’s not true “morality”; it’s a replica of public opinion. Morality is subjective, and a variety of factors influence and change it. A billionaire’s morality might differ from a minimum wage worker’s; a man’s from a woman’s, a nation’s from another’s, a liberal’s from a conservative’s; the list goes on! The real question to ask here is: which morality? And the system used by Conitzer is a useful oversimplification of that mind-bending complexity. A computer, right now, doesn’t act morally; it only acts by predicting what a human would do in its scenario!
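To make the “replica of public opinion” idea concrete, here is a minimal toy sketch of that approach: record the choices real people made in labeled scenarios, then predict the majority choice for a new scenario. The scenarios, features, and votes below are entirely invented for illustration; real systems like the Moral Machine gather millions of judgments over far richer features.

```python
from collections import Counter

# Each recorded judgment pairs a scenario (its features) with the choice
# a human made. All data here is hypothetical, purely for illustration.
judgments = [
    ({"passenger": "elderly", "pedestrian": "young"}, "swerve"),
    ({"passenger": "elderly", "pedestrian": "young"}, "stay"),
    ({"passenger": "elderly", "pedestrian": "young"}, "swerve"),
    ({"passenger": "young", "pedestrian": "elderly"}, "stay"),
    ({"passenger": "young", "pedestrian": "elderly"}, "stay"),
]

def predict_choice(scenario):
    """Predict what a typical human would choose by tallying the votes
    of all recorded judgments on matching scenarios and taking the
    majority. This replicates opinion; it does not reason morally."""
    votes = Counter(
        choice for features, choice in judgments if features == scenario
    )
    return votes.most_common(1)[0][0]

# The machine simply echoes the most common human answer:
print(predict_choice({"passenger": "elderly", "pedestrian": "young"}))
```

Note how the function never weighs the value of a life; it only mirrors whichever answer was most popular in the data, which is exactly why the output shifts if the pool of people polled shifts.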
Also important to consider is that morality, being neither timeless nor universal, will never be constant (because the factors influencing it are constantly evolving)! A useful example: just a century ago, much of humanity approved of slave-owning, colonialism, and sexism, to name a few. The point is that a fixed, comprehensive moral compass for a machine would be inefficient, because it needs to evolve along with changing human opinions!
So Does A.I. need to be Moral Right Now?
Think: what do A.I.s do now? A.I. enhances humanity’s daily activities by executing series of commands. This means that only a small subset of moral philosophy is required by A.I. in the present day. In simpler words, A.I. can’t make use of an entire library when all it needs is a few pages! A truly “moral” A.I. will be necessary only when we humans develop A.I.-enabled machines that are human-like and perform human-like functions… and we’re a long way from there (even Conitzer agrees)! What we need right now is a system that can quickly and accurately replicate human actions; anything more would only be creating complexity out of a simple necessity!