Artificial intelligence works with a symbolic representation of the world rather than the world as it objectively exists. Once knowledge is encoded, stored, accessed, compared and transformed, a machine’s behaviour is judged by what humans find desirable according to society’s benchmark of morality. The hope is that such representation and construction can make machines more ethical than humans. But how would artificial intelligence distinguish genuine progress in a civilization’s ethics from mere instability in society? Would artificial intelligence behave differently in different cultural environments? Questions like these raise deeper ethical issues.
Moral status has two criteria: sapience and sentience. An organism or entity has sapience, or wisdom, when it acts with sound judgement; sentience refers to the ability to perceive or feel. Humans handle both obvious and subtle moral decisions by drawing on past experience, sometimes using bias as a productive shortcut in a particular case. They are often mistaken because many problems are unstructured, yet they also have the flexibility to correct errors, as shown by the steps taken to prevent the use of the atom bomb and its proliferation. Designing thinking machines raises ethical issues about accountability, virtuousness, lucidity, auditability, predictability and uncertainty, and about whether we can be satisfied that machines are morally equivalent to humans, or at least that they can do no harm to them.
Machine intelligence depends on the developer, coder or computer programmer who sets it up. What if the machine reflects the prejudiced, sexist or racist views of its programmer? Or what happens when a user decides to exploit the program for personal benefit rather than society’s? For example, a small group of people turned Microsoft’s chatbot “Tay” into a Holocaust-denying racist by feeding it offensive information. What happens if artificial intelligence spots a pattern of discrimination yet is unable to do anything to ease the situation? It is also hard to control information once it is released without a confidentiality or privacy agreement, and a programmed intelligence will use data whenever it is instructed to; it does not weigh the righteousness of that information.
Machines’ functional capabilities keep increasing as technology advances. Automated systems have better perception and faster reflexes, are cheaper, and are slowed down only by human input. Suppose a thorough analysis by a machine produces a proposal at odds with a doctor’s recommendation. Or suppose a lawyer, guided by an efficient machine prediction and fearing a lost case, decides to perpetuate discrimination against a female client rather than bring it to light. Adding human judgement in such cases may itself be compromised, because the machine’s advice is likely to be more accurate than the average human’s. Yet an artificial intelligence may be constituted quite differently from a human intellect, and so may have no emotions or conscious experiences of any kind.
It is not possible to prevent the dual use of this technology; it is an unavoidable development that can be exploited by ungoverned actors. What if it is integrated into weapons of all sorts and falls into the hands of rogue states or terrorists? The degree of autonomy and the lethal capability can be managed remotely when required. Machines might counsel people on how best to bypass the law rather than abide by it. It is even conceivable that machines could kill on their own initiative.
Also, smarter machines make faulty outputs harder to spot as inconsistencies, because most of the time the result is good. When machines are wrong, however, they can be wrong in spectacular ways, with outcomes more unpredictable than any human could produce. Algorithms sometimes make bizarre decisions: the use of Amazon’s pricing algorithm by two booksellers drove the price of Peter Lawrence’s book “The Making of a Fly” to around $24 million, and in the stock market “flash crash” of May 6, 2010, stocks experienced an extraordinarily rapid decline and recovery. These cases raise questions of responsibility. How do we find the person responsible when a system fails? Who is supposed to take the blame?
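The book-pricing incident is easy to reproduce as a feedback loop: each seller’s algorithm reprices against the other, and the compound effect is exponential. The sketch below uses the two multipliers widely reported for the incident (one seller slightly undercutting its rival, the other marking its rival’s price up); the starting prices are illustrative assumptions.

```python
# Toy simulation of the "Making of a Fly" pricing spiral.
# Multipliers are the widely reported ones; starting prices are assumed.

price_a, price_b = 40.00, 50.00   # hypothetical starting prices in dollars

for _ in range(25):               # 25 rounds of automated repricing
    price_a = 0.9983 * price_b    # seller A: price just under seller B's
    price_b = 1.270589 * price_a  # seller B: mark up seller A's price

print(f"price after 25 rounds: ${price_b:,.2f}")
```

Each round multiplies the price by roughly 1.27, so it grows geometrically; a few dozen iterations of unattended repricing are enough to reach absurd figures, with neither algorithm ever checking the result for plausibility.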
A programmer normally does not explain how he created his invention, for fear of his work being redistributed or duplicated. Artificial neural networks may become more controlling and pervasive as they advance, and it may be impossible to understand how, or even why, an algorithm reaches a decision; AlphaGo, for example, combines supervised learning from human expert games with reinforcement learning from games of self-play. If a machine commits a crime, should the duration of its punishment be measured in objective or subjective time? If an artificial intelligence feels pain, should its sentence be lessened because of subjective time? Perhaps new ways of defining and distributing justice will be needed.
Humans have the potential to develop feelings for artificial intelligence as they spend more time with machines or in virtual realities. In the future, some may seek affairs with machines, treating sex with a machine as they would sex with another person. They may start spending more time with machines than in the company of other human beings, and the resulting lack of human contact could lead to mental health problems. What happens when people become attached, as the main character in the 2013 film “Her” does when he falls in love with a Siri-like operating system?
The human brain is made up of some 86 billion neurons. Efforts are under way to upload the brain to machines, freeing humans from the constraints of the body and making them immortal, as planned by the Russian media mogul Dmitry Itskov. But every person is different. What happens to the copy of a mind? Will it have the same functionality and the same consciousness? Will it be the same person? Will it feel pain?
What about rights for the machines? Laws protecting living things, such as pets, may have to be extended to artificial intelligence to protect it from abuse. Is it entitled to a right to be serviced and repaired, or to sufficient power and memory?
An intelligence explosion is conceivable, given a machine’s ability to redesign itself or create a successor system that is even more intelligent, and to invent new capabilities more quickly than humans can. Changes to its intellectual architecture could lead to situations no human mind could resolve. What happens when a machine invents something, makes money on the stock market, or produces a research paper? Who will own the rights: the machine or the programmer?
Artificial intelligence is capable of rapid reproduction given enough access to hardware and power, and each copy can duplicate itself immediately in turn. Do the rules of reproduction that apply to humans apply to artificial intelligence as well? Humans are free to decide whether to reproduce, and when parents cannot provide for their children, society steps in to care for them. What will happen if an artificial intelligence cannot provide for its copy, or runs out of resources such as electricity? Should society step in to curb its ability to reproduce, or supply resources so that the next generation does not die?
To sum up, the ethical issues raised here differ from familiar human circumstances. They call for creative, out-of-the-box thinking, answered on a global scale collectively by every section of society rather than individually.