Existential risks are defined as “risks that threaten the destruction of humanity’s long-term potential.” The instantiation of an existential risk (an existential catastrophe) would either cause outright human extinction or irreversibly lock in a drastically inferior state of affairs. Existential risks are a sub-class of global catastrophic risks, where the damage is not only global, but also terminal and permanent (preventing recovery and thus impacting both the current and all subsequent generations).
While extinction is the most obvious way in which humanity’s long-term potential could be destroyed, there are others, including unrecoverable collapse and unrecoverable dystopia. A disaster severe enough to cause the permanent, irreversible collapse of human civilisation would constitute an existential catastrophe, even if it fell short of extinction. Similarly, if humanity fell under a totalitarian regime with no chance of recovery, such a dystopia would also be an existential catastrophe. Bryan Caplan writes that “perhaps an eternity of totalitarianism would be worse than extinction”. (George Orwell’s novel Nineteen Eighty-Four suggests an example.) A dystopian scenario shares the key feature of extinction and unrecoverable collapse: before the catastrophe, humanity faced a vast range of bright futures to choose from; after the catastrophe, humanity is locked forever in a terrible state.
Five Types of Risk Associated with AI
Lack of AI Implementation Traceability
From a risk management perspective, we would often start with an inventory of the systems and models that incorporate artificial intelligence. Maintaining such a risk universe allows us to track, assess, prioritize, and control AI risks.
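As a rough illustration of what such a risk universe might look like in practice, the sketch below builds a minimal in-memory inventory and sorts it by risk rating. The record fields, system names, and rating scale are hypothetical, not drawn from any particular framework.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI risk universe."""
    name: str
    owner: str
    use_case: str
    risk_rating: str          # e.g. "low", "medium", "high"
    controls: list = field(default_factory=list)

# Build the inventory (illustrative entries only).
inventory = [
    AISystemRecord("credit-scoring-model", "Retail Banking", "loan approval",
                   "high", ["bias testing", "model documentation"]),
    AISystemRecord("support-chatbot", "Customer Service", "FAQ answering",
                   "low", []),
]

# Prioritise: highest-risk systems first, so controls are applied there first.
RATING_ORDER = {"high": 0, "medium": 1, "low": 2}
prioritised = sorted(inventory, key=lambda r: RATING_ORDER[r.risk_rating])
for record in prioritised:
    print(record.name, record.risk_rating)
```

A real inventory would add fields such as data sources, review dates, and responsible individuals, but the same track-assess-prioritise loop applies.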
Introducing Program Bias into Decision Making
One of the more damaging risks of artificial intelligence is the introduction of bias into decision-making algorithms. AI systems learn from the datasets on which they were trained, and depending upon how those datasets were compiled, they may reflect assumptions or biases. These biases can then influence the system’s decision making.
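One simple way to surface such bias is to compare a model’s decision rates across groups, sometimes called a demographic parity check. The sketch below uses made-up decisions for two hypothetical groups; it is a minimal diagnostic, not a complete fairness audit.

```python
# Hypothetical model decisions: (group, approved) pairs.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def approval_rate(group):
    """Fraction of group members the model approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# A large gap between groups is a signal the training data (or the
# model) may encode bias worth investigating.
gap = approval_rate("A") - approval_rate("B")
print(f"demographic parity gap: {gap:.2f}")
```

A non-zero gap does not by itself prove unfair treatment, but it flags where a closer look at the training data is warranted.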
Data Sourcing and Violation of Personal Privacy
With the International Data Corporation predicting that the global datasphere will grow from 33 zettabytes (33 trillion gigabytes) in 2018 to 175 zettabytes (175 trillion gigabytes) by 2025, vast amounts of structured and unstructured data are available for companies to mine, manipulate, and manage. As the datasphere continues its exponential growth, the risk of exposing customer or employee data will only increase, and personal privacy will become harder to protect. When data leaks or breaches occur, the resulting fallout can significantly damage a company’s reputation and may constitute legal violations, as many legislative bodies are now passing regulations that restrict how personal data can be processed.
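The scale of that forecast is easier to grasp as a growth rate. The quoted figures imply a compound annual growth rate of roughly 27% per year over the seven-year span:

```python
# Implied compound annual growth rate (CAGR) of the IDC forecast:
# 33 ZB in 2018 growing to 175 ZB by 2025, i.e. over 7 years.
start_zb, end_zb, years = 33, 175, 2025 - 2018
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"implied growth: ~{cagr:.1%} per year")
```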
Black Box Algorithms and Lack of Transparency
The primary purpose of many AI systems is to make predictions, and the underlying algorithms can be so complex that even their creators cannot thoroughly explain how the input variables combine to produce a given prediction. This lack of transparency is why some algorithms are referred to as “black boxes,” and why legislative bodies are beginning to investigate what checks and balances may need to be put in place. If, for example, a banking customer is rejected based on an AI prediction about the customer’s creditworthiness, the company runs the risk of being unable to explain why.
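One common way to probe a black-box model is to perturb each input feature and observe how much the prediction moves, a crude local sensitivity analysis. In the sketch below, both the scoring function and the applicant are hypothetical stand-ins; real explainability tooling (e.g. permutation importance or SHAP values) refines the same idea.

```python
def credit_model(income, debt_ratio, years_employed):
    """Stand-in for an opaque model: returns an approval score in [0, 1]."""
    score = (0.5 + 0.004 * (income / 1000)
             - 0.6 * debt_ratio + 0.02 * years_employed)
    return max(0.0, min(1.0, score))

applicant = {"income": 45000, "debt_ratio": 0.55, "years_employed": 2}
baseline = credit_model(**applicant)

# Nudge one feature at a time and record the change in the prediction.
for feature, nudge in [("income", 5000), ("debt_ratio", -0.1),
                       ("years_employed", 1)]:
    perturbed = dict(applicant, **{feature: applicant[feature] + nudge})
    delta = credit_model(**perturbed) - baseline
    print(f"{feature}: prediction changes by {delta:+.3f}")
```

A bank could use this kind of probe to give a rejected customer at least a directional answer (e.g. that the debt ratio dominated the decision), even when the model’s internals are opaque.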
Unclear Legal Responsibility
The potential risks of artificial intelligence discussed so far naturally lead to the question of legal responsibility: if an AI system causes harm, it is often unclear whether liability rests with the developer, the operator, or the provider of the training data.
Need for New Age Ethics