Peter Leonard and Toby Walsh ask whether the Cambridge Analytica scandal might be a #MeToo moment for algorithmic decision-making and AI.
It is now six months since revelations about Harvey Weinstein unleashed a sea change in attitudes to workplace harassment. During that time, widespread criticism following the Cambridge Analytica revelations prompted assurances from Mark Zuckerberg that Facebook now understands that with great data comes great responsibility. Meanwhile, concerns have been raised about diverse effects of data automation: automated decision-making affecting humans; unaccountable robots; excessive and intrusive surveillance; opaque, unreliable or discriminatory algorithms; online echo chambers and fake news.
Many of these concerns are also raised about AI. In addition, rapid developments in AI have prompted a policy debate about whether we are skilling the workforce to work with technology, and whether AI will deliver benefits to a few while many citizens are left behind. These concerns are exacerbated by a decline of faith in public policymaking: citizens' trust in institutions and governments is at historic lows.
Can we ensure that what we have learnt from Cambridge Analytica is applied to address these challenges?
Peter Leonard is a business lawyer and economist and principal of Data Synergies, a consultancy to data-driven businesses and government agencies. He is also co-chair of the Global Leaders Data, Disruption and Technology Forum.
Toby Walsh is Scientia Professor of Artificial Intelligence at UNSW Sydney.
Peter and Toby are members of the Australian Computer Society's AI and Ethics Technical Committee, which is endeavouring to do, quickly and well, what this article says needs to be done.