“Man has, as it were, become a kind of prosthetic God. When he puts on all his auxiliary organs, he is truly magnificent; but those organs have not grown on him and they still give him much trouble at times.”
– Sigmund Freud, Civilization and Its Discontents
This statement has perhaps never rung truer than today. Our rapidly increasing computing power and constantly developing computer science, particularly in the field of artificial intelligence, bring to light a slew of philosophical, ethical and practical matters that demand more attention than our attention spans can afford. It is therefore not surprising that we experience a sense of malaise when faced with the tremendous implications of our own creative powers: we are contemplating the invention of an artificial Adam while the metaphysical foundation for information ethics remains something of a terra incognita.
The ethics of artificial intelligence, particularly roboethics and machine ethics, which intersect applied ethics with robotics, is the main effort aimed at defining and regulating the moral behavior of humans in relation to artificially intelligent agents, as well as the moral behavior of artificial moral agents themselves.
Artificial Intelligence (A.I.) itself has been defined, rather tautologically but straightforwardly, as “the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent.” – https://plato.stanford.edu/entries/logic-ai/
Such a computer program can already successfully replace a human agent in rather menial tasks, and can learn from human behavior in situations that need to be escalated to a human agent.
As A.I. continues to develop, it will eventually be capable of replacing a human agent in virtually any position. Without being a cause for alarm ipso facto, this direction raises many challenges, from reigniting discussions of a universal income to the existential risk posed by advanced A.I.; many aspects are of interest here, not least of which is the ethical perspective.
It has been argued that A.I. technology should not be used to replace people in positions that require respect and care, so as not to infringe on human dignity. For the customer care industry, which aims to assist customers in making cost-effective and correct decisions and to ensure their satisfaction, the development of A.I. tools such as chat bots, IVRs and speech recognition software breeds a pragmatically focused discussion on the nature of morality in relation to A.I.
When weighing the interest of lowering staffing costs against that of providing the best customer care in an ever more competitive market, a perhaps less evident opportunity cost should be taken into account: one brought on by the distinction between deciding and choosing.
Decision making, as a computational activity, can ultimately be programmed, while choice is the result of judgment and less dependent on calculation.
Comprehensive human judgment can account for what still remains an ineffable part of human nature and is able to include non-mathematical factors such as emotions.
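The distinction between programmable decision and irreducible choice can be made concrete with a small sketch. The function below is purely hypothetical (the rule, thresholds and names are invented for illustration), but it shows how a decision, in the computational sense, reduces to a mechanical rule over measurable inputs, while anything that calls for weighing emotion or context falls outside the rule and must be escalated to human judgment.

```python
# A "decision", in the computational sense, reduces to a rule:
# given measurable inputs, the outcome follows mechanically.
# (Hypothetical customer-care rule, invented for illustration.)

def refund_decision(days_since_purchase: int, item_opened: bool) -> str:
    """Purely computational: no judgment, no non-mathematical factors."""
    if days_since_purchase <= 30 and not item_opened:
        return "approve"
    return "escalate"   # emotional and contextual factors exceed the rule

print(refund_decision(10, False))  # approve: the machine can "decide" this
print(refund_decision(45, True))   # escalate: weighing a frustrated
                                   # customer's circumstances is a "choice"
```

Everything the rule can express is a decision; the escalated remainder, where emotions and context must be weighed, is where choice begins.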
We require empathy and the capacity to establish an authentic rapport from a customer service representative; as customer service has become part of the product itself, we have grown to expect not only an expeditious resolution, but also care, consideration and a certain standard of empathy.
If used to replace human agents in positions that require the capacity for choice, A.I. can represent a threat to human dignity, insofar as A.I. is not yet entirely capable of taking into account some of the more nuanced psychological and emotional aspects of human interaction, nor would we be fully willing to accept decisions calculated and imposed on us by unempathetic machines.
We have always looked for familiarity in order to find comfort, and the Otherness of an intelligence outside the domain of humanity can be a source of discomfort. Intelligence, the capacity to make informed and independent decisions, has long been regarded as an inherently human attribute and one of the foundations of the idea of human dignity.
“For we can certainly conceive of a machine so constructed that it utters words (…). But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do.”
– René Descartes in Discourse on the Method (Translation by Robert Stoothoff)
This idea is challenged in the Information age.
Our notion of human dignity has evolved and changed, from the cut-and-dried classical Roman idea of dignitas hominis (which meant status, honor and respect) to the more nuanced meaning of the phrase today, which has increasingly passed into vernacular use in a variety of very different contexts and circumstances. Despite its interpretation in different frameworks, from Kantian categorical imperatives to religious doctrines and traditions, the concept of dignity itself remains opaque.
For instance, one can also argue that A.I. can represent a guarantee of human dignity once we consider the matter of equal treatment. An automated agent that takes empirical, impartial decisions, devoid of bias and reliant on a set of conditions that preclude the possibility of discrimination, can prove to be a reliable guardian of human rights. A chat bot or speech recognition software will be far less likely to discriminate based on race, gender or social origin than a human agent would.
From a consequentialist standpoint, the use of A.I. technologies in other areas, such as health care, can prove a desirable choice. The good of the many, however, does not bring an easy answer to questions such as “can a machine provide the same emotional care as a human can?” or “is human dignity affected by machines like the PARO Therapeutic Robot, used in providing care to dementia patients?”
As far as the customer care industry is concerned, the question of interest is whether to choose or to decide, and perhaps a wise answer is to find the golden mean.