Frontier AI Regulation Blueprint - What Do You Think?

by Shahar Avin



It is the summer of 2023, and everyone is talking about AI regulation. So am I. Last week a new paper came out, titled Frontier AI Regulation: Managing Emerging Risks to Public Safety (I'm one of the non-lead authors). The paper lays out in broad terms some of the challenges posed by the most advanced AI systems of today, and by those we expect in the coming years. These frontier AI systems have broad and unexpected capabilities, are very hard to make "behave well" reliably, and are very hard to contain once they are released to the public. The paper joins many proposals for the regulation of AI, and a vibrant discussion online and offline weighing the pros and cons of different proposals, against a backdrop of increased state and regional activity on AI regulation.

To help make sense of my own position, I've put together a first draft of a blueprint for frontier AI regulation, showing how different regulatory mechanisms, most of them already discussed in the AI regulation literature, could be combined to increase the chances of a good societal outcome as AI capabilities advance. I believe this needs to be a broad societal conversation, so I've teamed up with the amazing folk at Cotton Design to bring this to you as an interactive Miro board (on mobile you might prefer to access the read-only version). Please help make this better by:

  • adding comments on the nodes of the blueprint,

  • adding your own work or work you've found useful as comments for further reading,

  • or making your own version of the blueprint.

We've tried to make it really easy to interact with and adapt. The current content is my own opinionated take, but I hope it will serve as a tool for constructive conversations about how we, as a global society, are going to handle these technologies. At the very least, I hope it will make it easier not only to critique the proposals of others, but also to suggest alternatives in their place.

Shahar Avin is a Senior Research Associate at the Centre for the Study of Existential Risk at the University of Cambridge. While he is in discussions with DSIT about a secondment to the UK government, the views expressed here are in a personal capacity and do not reflect those of DSIT or HMG.
