Instilling Values into AI
Hi Ethics Nutters,
Good morning.
There are two main concerns of mine today. The first is unregulated genetic engineering by foreign states. The second is the values that AI systems will have, and whose values will be instilled in them. A great article I've just found is by Ibo van de Poel, titled "Embedding Values in Artificial Intelligence (AI) Systems".
Link: https://link.springer.com/article/10.1007/s11023-020-09537-4
This is not something that should be taken lightly, and it speaks volumes about our understanding of ethics and how it will be applied to AI systems.
Even claims made in the article are fraught with difficulty, such as the claim that someone can value something that has no value. Many, myself included, would say that claiming something valued can actually have no value is itself the imposition of a normative claim. It is the person who values an item or thing that gives it its value. An item does not objectively hold value, yet those claiming it either has or lacks value presume to know its true nature better than others.
If one person or group does not value an item but another person does, that item has value. The individual has given it its value, and to say it really has no value is simply an imposition by a majority, or worse, by an educated minority. If we think value and ethics are objective, although I believe fewer and fewer do today, then we can create and instill a moral algorithm for AI that can be used by everyone. However, because there is a growing realization that value and ethics are actually a very subjective element of humanity, this is a radical hurdle for the implementation of AI across cultures, sub-cultures, and perhaps even within a homogeneous culture that allows for differences of values and ethics within it, such as a liberalized system.
There is no way that a widely used AI system, unless programmed by the individual to some extent, can escape an authoritarian application of morality. By "programmed by the individual" I mean merely the selection options offered prior to a purchased system's start-up. This would allow broad normative legal decisions to be implemented while still allowing for the very necessary individualized, nuanced details of what the person has the space to decide is right and wrong. A rough sketch of what such a set-up might look like follows.
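To make the idea concrete, here is a minimal sketch in Python of what such start-up selection options might look like. Everything in it is my own illustration, not anything from van de Poel's article: the class names, the example settings, and the contents of the legal baseline are all hypothetical assumptions.

from dataclasses import dataclass, field

# Hypothetical legal baseline: broad normative legal decisions that
# apply to every system, regardless of the owner's chosen settings.
LEGAL_BASELINE = {"fraud", "harm_to_persons"}

@dataclass
class ValueProfile:
    """Moral settings the individual selects at first start-up (hypothetical)."""
    honesty_over_kindness: bool = True      # prefer blunt truth to comforting omission
    privacy_over_convenience: bool = True   # decline features that share personal data
    extra_prohibitions: set = field(default_factory=set)

    def prohibitions(self):
        # The legal baseline is always enforced; the individual's own
        # prohibitions are layered on top of it, never subtracted from it.
        return LEGAL_BASELINE | self.extra_prohibitions

def is_permitted(action_tags, profile):
    """An action is allowed only if it violates neither the legal
    baseline nor the individual's added prohibitions."""
    return not (set(action_tags) & profile.prohibitions())

# Two owners of the same system diverge on a contested value,
# yet neither can opt out of the shared legal floor.
vegan_owner = ValueProfile(extra_prohibitions={"animal_products"})
other_owner = ValueProfile()

print(is_permitted({"animal_products"}, vegan_owner))  # False
print(is_permitted({"animal_products"}, other_owner))  # True
print(is_permitted({"fraud"}, other_owner))            # False for everyone

The point the sketch is meant to show is the one argued above: the legal floor is fixed for everyone, while everything above it is the individual's to set, rather than one group's morality being programmed in wholesale.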
The idea that we can program morality into AI for all rests on the fallacy that morality is objective. It is not.
Thanks,
Andrew