The past, our best and perhaps only teacher, does not bode well for the ability of humanity in general, or of nations, institutions and the scientific community in particular, to face the need for the birth of an ethical science.
In the recent past we have already suffered the consequences of a first invasion of robots, one that took place almost without our knowledge, and the problem was never seriously addressed.
Since the 1990s, software robots, usually referred to coldly as 'algorithms' or similar expressions, have invaded and altered our world, taking ethically questionable or pernicious decisions and bringing destruction, death and poverty across the globe, while almost no one has faced the ethical problems raised by the fact that these decisions are taken by automatisms over which human intervention had, and still has, little or no influence.
The fact that an 'algorithm' is not a tangible physical entity, that it lacks the solidity of a physical device, makes it impalpable; its decisions are therefore taken without anyone bothering to define an ethical perimeter within which it may properly act.
Finance is certainly the area in which algorithms that make automatic decisions appeared before any other.
For decades the markets for foreign exchange, stocks, bonds and derivatives have been ever more pervasively in the hands of increasingly sophisticated algorithms, which take rapid and repeated decisions at a speed, accuracy and frequency previously impossible for a human being.
Scalping, arbitrage and similar techniques took hold only from the moment financial traders had at their disposal algorithms that are, in fact, genuine software robots, because only then did they become technically possible.
I say this with first-hand knowledge, because I was one of the first, at least in Italy, to develop one in the early 1990s.
And these algorithms are able to make decisions and act on the markets autonomously, without the slightest human intervention.
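To make concrete what such a software robot looks like, here is a minimal, deliberately simplified sketch in Python. The price feed, the order function and the threshold are hypothetical stand-ins, not any real trading system, but the structure (observe, decide, act, with no human in the loop) is the one described above.

```python
# Hypothetical sketch of an autonomous trading robot: it watches a price feed
# and places orders on its own, with no human approval anywhere in the loop.
import random

def get_price() -> float:
    # Stand-in for a real market data feed (simulated here so the sketch runs).
    return 100.0 + random.uniform(-1.0, 1.0)

def place_order(side: str, quantity: int, price: float) -> None:
    # Stand-in for a real broker API: the robot acts directly on the market.
    print(f"{side} {quantity} @ {price:.2f}")

def trading_robot(ticks: int = 10, threshold: float = 0.5) -> None:
    last = get_price()
    for _ in range(ticks):
        price = get_price()
        # The decision rule is purely mechanical: no ethical check, no human review.
        if price - last > threshold:
            place_order("SELL", 100, price)
        elif last - price > threshold:
            place_order("BUY", 100, price)
        last = price

if __name__ == "__main__":
    trading_robot()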
In the early stages of these practices the disasters were very obvious, as when all the actors on a market, following a rising or falling trend in equity prices or exchange rates, fed that trend and pushed the market towards unsustainable situations. As the algorithms were refined, the consequences became subtler but also structural, and they contributed in no small part to very deep crises with absolutely tangible, real effects.
Automation devoid of any ethical design, and often of any control at all, has led, for example, to disastrous social and human consequences in the energy and raw materials sectors.
In the food sector in particular, billions of people have been priced 'out of the market' (which in practice means reducing them to hunger) for the sake of sometimes quite marginal gains, obtained by creating speculative bubbles through automatic mechanisms governed by algorithms that answer to no code of ethics and are subject to no human control.
This problem, moreover, has never really been brought to the attention of the general public, nor even of the experts, above all of those who commission these software robots, whose sole purpose is to maximise profit, with no one asking whether they need ethical behaviour, or at least ethical limits.
The question is: would we consider it ethically acceptable for a human being, in order to achieve an overall marginal gain, for example by arbitraging grain prices, to cause a lasting rise in the worldwide price of that raw material, if we knew that as a result hundreds of millions of people could die of hunger?
Perhaps not; someone, at least, might ask the question.
If instead the decisions that lead to this situation are taken by an automatic mechanism, or by the combined action of a significant number of automatic mechanisms at a global level, would anyone ever worry about the consequences and the ethical dilemma that ensues?
Has anyone, given that it has already happened, worried about it?
The answer is no.
In practice, there is a surreptitious shift of the objective responsibility for an action.
Someone is asked by someone else to create a robot, devoid of any ethics, which, acting autonomously, maximises the profit of those who use it and shifts responsibility from those who commissioned it to an abstract, impalpable entity that is never called to account for its actions or the ethics of their consequences.
Is it possible to require the algorithm to ask, within its own decision-making process: "As a result of this action one million more people will die of hunger; should I carry it out or not?"
Certainly it is possible, and the added complexity is of the same order of magnitude as that of the algorithm itself, so it is technically feasible, though not simple.
To do this, however, it is necessary to define an ethical science that delimits and formalises what these ethical limits are and how they are applied and, last but not least, to define who can force robot builders to implement them, and how.
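To illustrate the claim that such a check is technically feasible, here is a minimal sketch with purely hypothetical names, thresholds and impact estimates: before choosing the most profitable action, the robot filters out any action whose estimated human cost exceeds a codified limit. Defining that limit, and who enforces it, is exactly what the ethical science invoked above would have to do.

```python
# Hypothetical sketch of an ethical limit placed inside the decision process:
# the robot still maximises profit, but only over actions that pass the check.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    expected_profit: float        # gain for the operator, in euros
    estimated_people_harmed: int  # e.g. people pushed into food poverty

HARM_LIMIT = 0  # the threshold an ethical science would have to define and justify

def ethically_permitted(action: Action) -> bool:
    # A check of roughly the same order of complexity as the profit logic itself.
    return action.estimated_people_harmed <= HARM_LIMIT

def decide(actions: list[Action]) -> Action | None:
    permitted = [a for a in actions if ethically_permitted(a)]
    if not permitted:
        return None  # doing nothing is preferable to an impermissible gain
    return max(permitted, key=lambda a: a.expected_profit)

if __name__ == "__main__":
    candidates = [
        Action("corner the grain market", 1_000_000.0, 1_000_000),
        Action("ordinary arbitrage trade", 1_200.0, 0),
    ]
    choice = decide(candidates)
    print(choice.description if choice else "no permitted action")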
It is not science fiction to imagine a robot that, having no ethical code and no intrinsic limitations, reduces entire nations to poverty or harms millions of people in order to carry out its task.
It is not science fiction because it has already happened, and it continues to happen even now.
Not only has all this already happened, but nobody has been in the least worried about it.
To the enormous damage done by financial algorithms has been added, over the years, the damage, perhaps less devastating but still tangible, caused by the proliferation of decision-making algorithms in every area.
I am thinking, for example, of the management of work shifts in large companies: a completely depersonalised management which, of course, takes no account of personal needs and obtains the worst possible result instead of the best.
What worries me most is the general attitude of everyone involved: those who use algorithms, those who write them and those who are their victims.
All of them live this absurd depersonalisation of responsibility, which produces a fatalistic resignation in those who suffer it.
It is as if the algorithm's decision were that of a superior entity, completely devoid of ethics, to whose rulings people submit like subjects to the capricious will of a new divinity.
The absence of an ethical science that sets boundaries on the action of algorithms has produced, and continues to produce, damage both large and small; it is necessary to act as soon as possible to put a brake on this drift.
I believe everyone needs to picture and understand that, to an algorithm in which no ethics has been implemented, it may seem entirely acceptable for a legal entity, for example a company listed on the stock exchange, to decide on an action that causes the death of the entire human race in order to gain a thousandth of a euro. Including the holders of that company's shares.
An algorithm, by itself, does not have the tools to understand that this would be an insane action unless someone provides them a priori. I think it is time to take a hard look at the tunnel into which we are cheerfully and carelessly heading.