We are at a fascinating time in Australia, where the data space has seen an enormous amount of new regulation over the last few years, with significant further regulation on the horizon. The likely impact of these changes on how we deal with data and, in turn, how we approach ethics raises a number of compelling (and complicated) questions - particularly as they relate to AI.
Technology, Media & Telecommunications Partner Michael Park sat down with Dr Mariarosaria Taddeo, Deputy Director of the Digital Ethics Lab at Oxford University, to discuss the intersection of these themes and the emerging trends around the world.
Data has enormous potential, both positive and negative, which raises the extraordinary challenge of harnessing its power and value whilst mitigating the associated risks. Dr Taddeo outlines some of the key learnings and driving questions raised by this double-edged sword, namely:
- the lessons learned by companies who've 'gotten it wrong' when it comes to the ethics of data;
- the fine balancing act required by regulators to ensure sufficient oversight without stifling innovation;
- the need for greater awareness by users that our data has value, and what we think is free actually comes at a cost; and
- who should take the lead on regulation: government, industry or both?
The growing awareness of the intersection between data and ethics in AI has given rise to a rapid expansion in the number of organisations adding a chief ethics officer to their ranks. At a basic level, the role is charged with ensuring compliance with regulations, but, increasingly, it's also becoming an indispensable actor in helping organisations make the ethical choices that impact both their users and their own day-to-day operations.
Accompanying the existing (and proposed) regulatory change is an increasing emphasis on 'reasonableness' - a sense of interpretive flexibility that recognises the rapidly changing nature of technology and the fact that what was once reasonable may no longer be so just one year down the track, let alone a decade. Alongside this is the question of 'explainability', addressing the problems that can arise once AI systems begin to arrive at conclusions and outcomes that not even their own creators can readily understand or explain. At its heart, this is a matter of accountability, for if we're to rely upon the decisions and processes of an AI system, it must - from an ethical standpoint - be able to explain how and why it made them.