People before platform
Addressing the ethical dilemmas of Artificial Intelligence is more challenging than some expect
Speaking last week about the World Economic Forum in Davos, one commentator for The Economist described the annual gathering of movers and shakers from around the globe as being like a teenage party: “a place riddled with insecurities”. Those insecurities were especially visible in discussions that touched on the ethics of Artificial Intelligence.
The 12 months since the previous Davos gathering gave plenty of reasons for worry on this topic. To name a few: technology giants including Amazon, Facebook, Google and Huawei all saw their privacy practices come under further scrutiny; Facebook agreed to a historic $5 billion fine related to the Cambridge Analytica scandal; and allegations of algorithmic bias were made against Goldman Sachs and its Apple Card.
At the same time, regulators around the world scratched their heads, pondering what steps to take to address data and privacy concerns and to answer the many other questions this technology raises. That all took place amid persistent concerns about state surveillance in China and “surveillance capitalism” in the United States.
To those developing the technology, the promise of A.I. is clear – as profound for humanity as fire or electricity. But its application is mired in difficult issues, ranging from civil liberties to bias in algorithms and tech companies’ access to the large amounts of data needed to train A.I. systems. The topic is important enough that the organizers highlighted the “safe, ethical and efficient use of data” in the 2020 Davos Manifesto. So participants at Davos seemingly came prepared to do some soul searching around A.I. and privacy concerns. Unsurprisingly, though, answers were few.
There were sessions such as “A Future Shaped by a Technology Arms Race”, in which Huawei’s founder and CEO Ren Zhengfei perhaps unhelpfully – but optimistically – compared A.I. to the atom bomb. He said that while some are anxious about the technology, A.I. is something people will get used to. Another session, “Faith in the Fourth Industrial Revolution”, included a Roman Catholic priest who spoke about the questions of immortality and the soul that emerging technologies raise. He argued that the Church should have a role in defining the place of A.I. in society.
Just how big a challenge it is to address the ethical questions A.I. presents was perhaps best illustrated by the fact that, rather than suggesting answers at Davos, some of the tech industry’s most influential leaders offered warnings. They compared the topic to the debate on climate change, calling for a global framework and “precise regulation” to govern the technology.
Calling for regulation might seem like an unusual move for these companies. Based on Davos, however, there does seem to be a growing realization among them that the questions around A.I. are simply too big and too important to society for anyone to answer alone. And the call for a global level playing field makes sense as technology makes our world increasingly smaller. A diverse group of stakeholders from across society will have to take part in deciding just how much privacy should be sacrificed in order to enjoy the promise that A.I. holds. It won’t be an easy path to forge, but it is one with incredible promise if we tread prudently and with purpose.