Research themes

AI:FAR focuses on understanding future trajectories in AI progress; how AI might impact society in particularly profound and long-lasting ways; and how AI can be guided and governed.

We examine specific impact and risk scenarios, such as the role of AI in contexts ranging from scientific progress, to security, to critical systems such as global agriculture. We also study how principles, norms, and soft and hard forms of governance shape the trajectory of AI. Lastly, we take an exploratory and participatory approach to AI futures, drawing on the expertise of affected communities, technology developers, domain experts and civil society.

Our work falls at the intersection of three themes.

Recent work


Covid-19, AI and digital technology

The Covid-19 crisis presents an unprecedented opportunity to leverage AI for global benefit, but rapidly scaling up the use of AI to address a crisis carries its own challenges. A Nature Machine Intelligence article by Asaf Tzachor, Jess Whittlestone, Lalitha Sundaram and Seán Ó hÉigeartaigh explores these ethical and governance challenges, and was discussed in a widely-shared MIT Technology Review interview with co-author Jess Whittlestone.

This will be followed by an article, currently under review, on the ethics of using AI in pandemic management by Stephen Cave, Jess Whittlestone, Rune Nyrup, Seán Ó hÉigeartaigh and Rafael Calvo, as part of a special issue commissioned by the World Health Organisation and the BMJ.

AI:FAR researcher Alexa Hagerty, in collaboration with researchers from the Alan Turing Institute and the Ada Lovelace Institute, will contribute an article to the same issue on inequality in the use of AI in the Covid-19 response.


Capabilities and impacts of artificial intelligence

Seán Ó hÉigeartaigh co-organised the Evaluating Progress in AI (EPAI) workshop, held as part of the European Conference on AI (ECAI). The team had two papers accepted to the workshop and one to the main conference.

• The Scientometrics of AI Benchmarks: Unveiling the Underlying Mechanics of AI Research (Barredo, Hernández-Orallo, Martínez-Plumed & Ó hÉigeartaigh) examines the research and collaboration dynamics underpinning progress on key benchmark challenges in AI (e.g. image classification challenges such as ImageNet).

• Canaries in Technology Mines: Warning Signs of Transformative Progress in AI (Cremer & Whittlestone) characterises theoretical milestones and indicators of breakthrough progress in future AI development, and won the Best Paper Award at the workshop.

• AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues (Hernández-Orallo, Martínez-Plumed, Avin, Whittlestone & Ó hÉigeartaigh) maps research on AI safety and risk challenges (e.g. adversarial attacks, reliability and robustness, value alignment) to the literature on different research directions in AI progress.


Governance and responsible development of artificial intelligence

Alexa Hagerty ran a six-month series of small-group discussions, readings and mini-workshops with industry practitioners on the theme “Challenges in Responsible AI Innovation”, culminating in a panel discussion on 17 June on emerging factors that help make AI fairer, more transparent and more approachable.

Alexa has also established a collaborative project on “Citizen Science Labs: A collective intelligence experiment on perceptions of emotion recognition technologies” supported by Nesta.

Haydn Belfield, Shahar Avin, Seán Ó hÉigeartaigh and international colleagues collaborated on a multi-institution report, Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. The report outlines a range of technical and institutional mechanisms for translating ethical principles into responsible practices in AI development.