Today's AI is the product of data-driven learning; it has no conscience and cannot justify its reasoning. AI is neither inherently good nor bad; it simply returns results based entirely on what it has learned. The quality of an AI system is therefore determined by how well we train it and, even more importantly, by how well we evaluate it.
There have already been AI failures that give cause for concern, including a self-learning chatbot that developed racist and sexist traits after only a few days of public exposure, and a resume-screening AI that filtered out women with gaps in their work history (often due to raising children).
Many new technology deployments have been made safe by establishing controls and practices around them. Historically, this has been achieved through a combination of thoughtful architecture and disaster response. We could benefit from studying how domains such as aviation, nuclear power, and medical devices evolved.
We have yet to develop the frameworks and practices that would let us test and regulate AI implementations, particularly where our own safety is concerned. The current engineering emphasis is certainly on safety, but we still do not know enough about how the risks can manifest. Even where AI poses no safety risk, it can still create an ethical and moral problem by perpetuating unjust practices.
Human behavior is highly diverse; as long as the law is not broken, society tolerates a wide range of behaviors and attitudes. One of the hardest questions we face with AI is deciding what constitutes appropriate AI conduct. In most cases there will be only one AI behavior, and we must then work out how to agree on what that behavior should be from an ethical standpoint.
Some autonomous cars let drivers select a driving profile based on their own preferences, essentially balancing self-preservation against harm to others. Should we even allow such choices?
Graffiti is illegal, even though it is mostly harmless and a form of self-expression. Some hacking activity has a similar character: no malicious harm is intended, but it forcefully advocates a point of view. At the other end of the spectrum lies intentional harm (with possible safety implications) driven by greed and power, which is outright criminal activity (and is even wielded by nation-states). When we consider this in the context of an AI-enabled future, we must assume that human ingenuity and malice will find ways to subvert an AI's purpose, potentially causing societal or individual harm.