For the birth of an ethical science


Ethics, like all philosophical-humanistic disciplines, was born and evolved with a non-scientific approach.

Some of these disciplines have long been trying to transform themselves into sciences, not without great difficulty.

To do so, they try to lean on mathematics, statistics, game theory, and so on.

Often the results are disappointing, in my opinion because we often look for a shortcut to science: we take tools (theories, mathematical analyses, etc.) considered scientific and apply them to theories or to the analysis of facts without first having defined the foundations of the science we want to build, obtaining arbitrary, unscientific results.

First of all, therefore, the foundations of a new science should be defined; only later, on those foundations, will it be possible to apply methods and verify theories.

It is difficult for the humanities to adopt the same scientific foundations that "work" in disciplines such as mathematics, physics, and so on.

As for ethics, I don't think anyone has ever attempted this before.

We have, of course, long talked about the ethics of science, meaning that scientific research should be confined within a certain ethics.

But the basic problem is first to create an ethical science: one that establishes scientifically what is ethically acceptable and what is not, on the basis of which criteria, and within which domain all of this applies.

Ethics, to date, has always been based on religious convictions, established practices, common feeling, the opinions of various thinkers, philosophers, scientists, and so on: none of which, obviously, is scientific.

 

But is the problem of turning ethics into an ethical science a real problem?

Unfortunately, or fortunately, it is a real and also very pressing problem.

Within our societies, the ethical behavior of the individual is regulated through laws, customs and religious beliefs; the judgment as to whether an individual has or has not behaved ethically is made through the legal system and through the approval or disapproval of the majority of people, and it usually takes place ex post.

In other words, each individual is presented with a set of rules, whether laws, customs or religious precepts, which define, in a rather vague and open-to-interpretation way, how they should behave.

Society then intervenes to sanction unethical behavior only after it has occurred: the individual decides independently which options to choose and whether or not to comply with a given ethical code.

A human being has the ability to make decisions even without an ethical code.

In the near future (or perhaps it would be better to say already now?), there will be robotic systems that will have to make decisions based (also) on ethical principles and rules which, due to the very nature of the software that must make those decisions, will have to obey strictly scientific criteria.

Robots must first have an ethical code on which to base decisions; only then can they make them.

Otherwise they would literally not know what to do.

This ethics, which robots will of necessity have to rely on, does not yet exist: this could turn out to be a huge problem.

On a 'human' level there is no shared ethics: every state, every religion, down to every individual person, offers different ethical nuances.

Should a scientific definition of ethics take into account differences between nation and nation, between religion and religion, between individuals?

Who will decide how a robot should behave in the face of a choice that has an ethical component?

The company that produced it?

Will it have to follow rules established differently by each individual nation?

Will it have to follow rules decided by each individual human being who owns it?

In the latter case, if a robot carries out its activities within a community (group, company, family, etc.), who decides which behavior, in a given situation, is ethical and which is not?

Those who love science fiction will already be thinking: "There are Asimov's three (four) laws!"

Asimov's ethics, that is, the behavioral code within which robots must operate in the universe he devised, consists of 'human' ethical principles: it assumes that the robot is a sentient being, capable of making specific ethical assessments starting from general principles.

Our real world is about to be invaded by robots that are not sentient at all, or that are sentient only in a barely 'human' way and are far less capable of abstraction and contextualization; defining abstract ethical principles for a computerized press, a self-driving car or a robotic lawnmower makes no sense.

 

The question is complex and not easy to solve, and the implications are pressing from an economic point of view as well, given that, as long as a machine acts under the supervision of a human being, it is taken for granted that the consequences of the machine's unethical behavior are the responsibility of that human being.

The moment a machine makes decisions without the supervision of a human being, whose responsibility will they be?

Of the company that produced it?

Of the person who bought it and then put it into operation?

Can this person accept responsibility for the actions of a machine that follows an ethics not under their direct control, but decided by the company that produces it and/or by the nation that authorized its use?

 

Many examples can be given, concerning robots already in use or soon to be marketed, just to clarify the pragmatic nature that an ethical science must assume.

 

Let us assume that it is decided that each individual state can define its own ethics:

Could robotic lawnmowers be sold in countries with a Buddhist ethic, given that, without a sensor that recognizes the presence of animals along the cutting line and a mechanical arm that moves them to a safe area, the machine, cutting the grass on its own initiative, could kill worms and insects in its path?

Could a car that does not have a sensor that recognizes the presence of a woman alone on board and refuses to start be allowed to circulate in certain Arab countries?

Let us hypothesize that a self-driving car has its own ethical evaluation scheme that allows it to decide whether to brake suddenly to avoid running over whoever is crossing the road, or to continue, taking into account the possible harm to the people in the passenger compartment.

Suppose that what is crossing the street is: a chicken, a cat, a dog, a cow, a human being.

The acceptable risk for those in the passenger compartment would depend on the value that each nation, or each individual based on their personal beliefs, places on the possible death of the subject being run over.

A car in a Hindu, Catholic or Shinto country, or with a Hindu, Catholic or Shinto owner, could have different scales of values.

Could cars with their own ethical code circulate in other countries where that scheme is not congruent with local values?
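To make the idea concrete, here is a minimal sketch in Python of what such a configurable value scale could look like. The scheme names and all the weights are invented for illustration; they do not reflect any real standard or any religion's actual valuations.

```python
# Hypothetical sketch: a per-jurisdiction "value scale" for the braking dilemma.
# All weights are invented for illustration; nothing here reflects a real standard.

# Harm weight assigned to running over each subject (higher = worse), per scheme.
VALUE_SCALES = {
    "scheme_A": {"chicken": 0.1, "cat": 0.3, "dog": 0.3, "cow": 0.5, "human": 1.0},
    "scheme_B": {"chicken": 0.2, "cat": 0.2, "dog": 0.2, "cow": 0.9, "human": 1.0},
}

def should_brake(subject: str, scheme: str, occupant_risk: float) -> bool:
    """Brake hard only if the weighted harm of running over the subject
    exceeds the risk that sudden braking poses to the occupants."""
    subject_harm = VALUE_SCALES[scheme][subject]
    return subject_harm > occupant_risk

# The same situation yields opposite decisions under different schemes:
print(should_brake("cow", "scheme_A", occupant_risk=0.6))  # False: keep going
print(should_brake("cow", "scheme_B", occupant_risk=0.6))  # True: brake hard
```

Two cars facing the identical situation would behave incompatibly, which is exactly the cross-border incongruence the question above points at.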

 

Let us admit that it is the manufacturer that decides the ethical code:

Will it take into account the various sensitivities of the various countries and religions?

Will the company be forced to produce an infinite number of behavioral models?

Will it produce a single model and then have to give up marketing it in countries whose ethical codes are so different that they do not accept the behavior of its robots?

If ethics is decided by the manufacturer, in the event of damage to third parties, who will bear the consequences of ethically questionable choices?

Could the owner shift the blame onto the manufacturer, claiming not to share the robot's choice and not to have been able to influence its decisions?

 

Let us admit that it is the buyer who is responsible for deciding the ethical code:

Will companies be able to produce robots with codes flexible enough to adapt to any customer request?

Would any customer request be acceptable and, if not, who would decide whether the ethical code chosen by the individual is acceptable to the community?

For example, left to the individual's choice, the decision criterion of a robotic car could be that, in a doubtful case, the safety of the people on board is always to be considered paramount; therefore, even when weighing a possible scratch to a passenger's hand against the death of a school group of 30 children, the choice that avoids the scratch and causes a massacre would still be preferred.
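To see why such an owner-chosen criterion is perverse, consider this hypothetical encoding of it: "occupants first, always" is a lexicographic rule, so no harm to outsiders, however large, can ever outweigh the smallest harm to an occupant. Names and numbers are invented for illustration.

```python
# Hypothetical owner-defined criterion: occupant safety is lexicographically
# dominant. Any harm to occupants, however small, outweighs any external harm.

def choose_action(actions):
    """Pick the action with the least occupant harm; external harm is only
    a tie-breaker. 'actions' is a list of (name, occupant_harm, external_harm)."""
    return min(actions, key=lambda a: (a[1], a[2]))

actions = [
    # (name, occupant_harm, external_harm) -- illustrative numbers only
    ("swerve_into_school_group", 0.01, 30.0),  # a scratch vs. 30 deaths
    ("brake_hard",               0.05,  0.0),  # minor bruises, nobody hurt outside
]

print(choose_action(actions)[0])  # -> "swerve_into_school_group"
# The lexicographic rule prefers avoiding a scratch over avoiding a massacre,
# which is exactly the perverse outcome described above.
```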

In all likelihood, both companies and buyers, in order not to have to shoulder an imponderable series of consequences, will try in some way to obtain rules, at least at the level of states, if not at a supranational level; but who will be responsible for building the ethical science needed to establish which behaviors are ethical and which are not, in a specific and precise way rather than in a generic and arbitrary one?

 

The difference is not small: it is abysmal.

Other examples.

General ethical standard: Do not kill.

Ethical science: should the car brake, with a 40% chance of ending up against a wall and killing everyone in the car, or try to avoid the obstacle, with a 60% chance of killing a pedestrian? The criteria must be set down in a well-defined algorithm that allows the robot to make an ethical choice. And that is not easy at all.
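One possible way (certainly not the only one) to turn this into a well-defined algorithm is to compare expected harms. A minimal sketch, using the 40%/60% figures from the example and harm values that are pure assumptions:

```python
# Minimal sketch of an expected-harm comparison for the 40% / 60% dilemma.
# The harm values are invented placeholders; choosing them IS the hard,
# unsolved ethical question the text is pointing at.

HARM_OCCUPANTS_KILLED = 2.0   # e.g. two people in the car (assumed)
HARM_PEDESTRIAN_KILLED = 1.0  # one pedestrian (assumed)

def expected_harm(p_fatal: float, harm_if_fatal: float) -> float:
    return p_fatal * harm_if_fatal

brake  = expected_harm(0.40, HARM_OCCUPANTS_KILLED)   # 0.8
swerve = expected_harm(0.60, HARM_PEDESTRIAN_KILLED)  # 0.6

print("brake" if brake < swerve else "swerve")  # -> "swerve"
# Change HARM_OCCUPANTS_KILLED to 1.0 (one occupant) and the answer flips:
# the arithmetic is trivial, the value judgments inside it are not.
```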

 

We have talked a lot about cars, because they are the nearest type of robotization and, at the same time, the one with enormous ethical aspects to deal with.

Let's take other, even more complex examples.

It is often taken for granted that a self-driving car should respect the road code 100%: stop at stop signs, never exceed speed limits, and so on.

Is this behavior always ethically acceptable?

The answer is not obvious, far from it.

Let us assume that a robot-driven car approaches an intersection traveling at the legal speed and, with a green light, just before entering the intersection, sees a car driven by a human being who can no longer stop at the red light and will therefore collide with the robotic car, with serious consequences for all the human beings aboard the two vehicles.

Let us admit that the robotic car cannot brake without involving the cars behind it, nor steer to the right or left.

In this specific case, it would be useful for the car to be free to decide, quite exceptionally, to accelerate, even though this obviously means violating the speed limit.

We therefore see that even a norm that seems absolutely logical, such as full compliance with the road code, can prove insidious if we do not correctly weigh behaviors that are unethical in general but appropriate in particular cases.
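One way to express "rules with weights" rather than absolute prohibitions is to treat each norm as a soft constraint with a finite violation cost, and collisions as far larger costs. A sketch under those assumptions, with entirely invented numbers:

```python
# Sketch: road-code norms as soft constraints with finite penalties, so that
# an exceptional violation (speeding through the intersection) can be chosen
# when it avoids a far worse outcome. All costs are illustrative assumptions.

COSTS = {
    "speeding_violation": 10.0,      # breaking the limit is bad...
    "rear_end_collision": 500.0,     # ...a pile-up is much worse...
    "side_impact_collision": 5000.0, # ...a T-bone crash is worst of all
}

candidate_actions = {
    # action -> consequences it incurs in this specific scenario
    "brake_hard": ["rear_end_collision"],
    "hold_speed": ["side_impact_collision"],
    "accelerate": ["speeding_violation"],  # clears the intersection in time
}

def best_action(actions: dict) -> str:
    return min(actions, key=lambda a: sum(COSTS[c] for c in actions[a]))

print(best_action(candidate_actions))  # -> "accelerate"
# A hard rule "never exceed the limit" (infinite cost) would instead force
# one of the two collisions -- the insidious outcome described above.
```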

 

Other examples relate to the home-automation defense of one's house.

In which cases could a home automation system decide to open fire, activate a potentially lethal electric shock, and so on, to prevent a break-in while the owners are away? Ethically, is it acceptable for a robot to kill a human being, even if that human being is a thief?

Who would be responsible for such an action?

And who, or what, would be responsible for a possible error, that is, for serious injury to or the death of someone entering private property for reasons other than theft, perhaps even to protect the owners and/or their property?
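To show what "establishing the cases" could mean in practice, here is a hypothetical graduated-response policy for such a system. The escalation ladder, the threat levels and the cap are entirely invented assumptions, not a recommendation.

```python
# Hypothetical graduated-response ladder for a home-defense system.
# Levels, thresholds and the cap are invented for illustration only.

RESPONSES = ["alarm", "call_police", "non_harmful_deterrent", "harmful_force"]

# Assumed policy cap: never apply force that can injure or kill.
MAX_ALLOWED = "non_harmful_deterrent"

def respond(threat_level: int) -> str:
    """Map a threat level (0-3) to a response, clamped to the policy cap."""
    level = min(threat_level, RESPONSES.index(MAX_ALLOWED))
    return RESPONSES[level]

print(respond(3))  # -> "non_harmful_deterrent", never "harmful_force"
# Whether the cap may ever be raised -- and who is liable if the system
# misreads a rescuer as a burglar -- is exactly the open question above.
```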

 

In my opinion these rather complex problems (mine are just examples; the ethical problems related to robotization are obviously enormous in number) could be of great help to humanity.

The first positive consequence is that, having to formalize what is and is not ethically acceptable for a robot, we human beings, willy-nilly, will have to come to terms with the ethics we claim to be inspired by: an ethics that is highly arbitrary and that, while hiding behind very lofty principles, then materializes in actions often very far from the 'theoretical' ethics behind which they hide.

The second consequence could be the definition of a common ethics, more objective and shared, which finally leads humanity beyond religious ethical codes and customs that are accepted as good because they are atavistic, and not always because they are justified or justifiable.

Resources

  1. https://steempeak.com/ita/@ilnegro/per-la-nascita-di-una-scienza-etica

