
Entries from October 1, 2019 - October 31, 2019

Thursday
Oct 10, 2019

Ethics for AI - way forward or false dawn?

Ethical principles have been proposed for AI. A common feature of these proposals is an expert body that develops a framework and rules to govern the development and use of AI.
This approach should be compared with the counterfactual: diverse decisions by entrepreneurs making competing bets on the future and by consumers acting on their preferences, within a legal and policy framework of market governance. The counterfactual is itself founded on ethical principles. Adopting specific ethics for AI involves two problems.
First, AI is not a distinct and unique category within which distinct problems or harms are likely to arise. Problems, and the possible need for new ethics and rules, can be expected to relate to a specific application of AI rather than to AI as a whole; and may also relate to human decisions and institutions. Focussing on ethics for AI therefore involves a category error.
Second, if ethics for AI is to have any bite, it would involve substituting the views of a committee for the views of entrepreneurs and consumers engaged in a process of innovation without permission and selection via a contestable process. Ethics for AI would concentrate power in a group and reduce individual agency and innovation - an outcome that would arguably be unethical.

Given the promise of AI, an immediate challenge is identifying and removing barriers to its adoption and use; and adapting existing law and regulation to a technology and market context which may require different modes of regulation, and potentially less regulation, to the extent that AI enables the market to reduce information asymmetries and better protect consumers from harm.
Where new standards are justified, they should ultimately apply to all algorithms and decisions, including human decisions. This may require that we keep humans 'out of the loop' where their performance is inferior to that of machine-learning-based algorithms; and that will likely prove to be primarily a political, rather than ethical, challenge.
This paper by Brian Williamson explores these issues.


Friday
Oct 11, 2019

Platforms - growth and policy

Online platforms have grown rapidly as a means of matching market participants and facilitating transactions. Their growth and comparative success reflect the preferences of participants for multisided online platform markets in competition with alternative forms of market organisation and governance – online platforms have grown because they benefit users.

Platforms were enabled by connectivity, mobile devices and the low entry barriers to developing and distributing apps. They offer advantages over conventional business models and regulation in terms of discovery and matching in markets characterised by abundance and in providing effective market governance to participants – online platforms reduce information asymmetries.

To realise the full potential of online platforms, policymakers should first seek to do no harm. While platforms have created some harms alongside benefits, the challenge is to address these in an evidence-based, targeted and proportionate manner in order to preserve their substantial benefits while mitigating those harms.

Policymakers should also remove unnecessary regulatory barriers to the development of online platforms, taking account of the market governance that platforms themselves provide. Not only would this benefit users directly, it would also increase competition in the economy as a whole as online platforms provide a competitive challenge to existing businesses, including in areas that previously saw limited competition. 

Brian Williamson considers these issues in an article for InterMedia.