AI could mean we’re on the brink of an unprecedented historical shift in human development
The rapid development of artificial intelligence means that we may be on the brink of an unprecedented historical shift in human development, for which we need to be prepared. In a new and important report, the AI Sustainability Center highlights the need for more multidisciplinary research in this area.

The 18th century saw the birth of what is known as the Enlightenment, defined as “man’s emergence from his self-imposed immaturity” by the German philosopher Immanuel Kant. Instead of being ruled by superstition and unquestioned tradition, humanity was supposed to take control of its destiny and shape the world through reason. Responsibility and accountability for the well-being of humanity were shifted from God and church to men and women. As a result, the state of human well-being has never been better than it is today, as shown to us by thinkers and researchers like Hans Rosling and Steven Pinker.

With artificial intelligence, responsibility and accountability may very well be shifted away from humanity again. But not back to God. We are slowly moving towards a situation where control over decisions in business and everyday life is handed over to computers and software with extraordinary power – some might say god-like power – to process and analyze information.

The potential benefits are enormous, since we, with supreme computing power, can tackle many of humanity’s remaining challenges, such as poverty, diseases, and global warming. But the potential risks are significant, unless we are careful to keep under our control the responsibility and accountability for our destiny and ensure that artificial intelligence is used with reason and high ethical standards.

Just how to do this is not clear and will have to be the subject of considerable research. This is why the AI Sustainability Center’s report is so important. In the report, problems facing organizations already today, and to an increasing extent in the future, are grouped into four categories, which together give a good view of the task at hand: bias, accountability, misuse, and transparency/explainability.

Most of us share a deeply held view that we should be treated, and treat each other, as equals, without regard to irrelevant factors such as gender, race, or religious belief. We are entitled, we feel, to be treated as humans, not only by the government but by everyone. It is therefore a great concern that some artificial intelligence solutions have ended up reproducing strong biases and patterns of discrimination in society, shown not least by Amazon’s now scrapped recruitment platform, which started to prioritize men over women.

Most of us also think that humans should be accountable for the consequences of their decisions and actions. Such accountability assumes, however, the possibility of foreseeing those consequences to a reasonable degree. This becomes a challenge with, for example, machine-learning solutions that after a while will be able to make decisions with totally unforeseeable consequences. How can an AI be made subject to reason, diligence, and fairness? Is that at all possible for us, with our limited cognitive capabilities?

But not all consequences are unforeseeable. AI may obviously be misused for the direct pursuit of power, control, and warfare. Is it possible to adopt global standards, such as the UN Declaration of Human Rights, to prevent such misuse?

To tackle bias, accountability questions, and the risks of misuse, transparency and explainability of what an AI is doing or has done will be critical. Transparency brings not only trust but also the possibility of allocating responsibility and accountability. But how can a sufficient degree of transparency be brought about?

In the report, the AI Sustainability Center shows the status of research in these four areas; I refer the reader to the report for details. While the report shows an increasing degree of research in all these areas, it is obvious that much more needs to be done.

As a lawyer, I am struck by the lack of legal research on AI. The reason of the Enlightenment philosophers has not, on its own, been enough to bring us to where we are today. Research clearly shows how important laws regarding, for example, property rights and other legal rules of the game of business and society have been for growth and prosperity. Therefore, it seems critical that much of the research on AI be done in the area of law.

The AI Sustainability Center has taken on the highly important task of creating a world-leading multidisciplinary hub to address the scaling of AI in broader ethical and societal contexts. The now published report – the first of many to come over the years – is an important milestone. Not only for the establishment of the Center as such, but also for the purpose of ensuring that human reason, with all its nuances, is kept in the driver’s seat as AI develops, and not handed back to the dark place of uncontrollable forces from which it was released in the 18th century.