The Liu Debates aims to provide space for critical, engaging conversations with informed participants at UBC by hosting frequent events on a topic of current policy interest.

“Prediction is Very Difficult, Especially if it’s about the Future”[1]: Governing Artificial Intelligence

Darra L. Hofman

Ph.D. Candidate, School of Library, Archival, and Information Science

On 6 April 2018, the Liu Debates brought together a diverse panel of UBC scholars to confront the emerging policy issues around Governing Artificial Intelligence (AI). Ultimately, the discussion raised more questions than answers, but considering these questions can help ground the governance of AIs with an eye towards their all-too-human impact.

A Question of Trust

Some of the potential futures enabled by AI seem laced with undertones of science fiction, either utopian – ceding our economy to the governance of benign AIs – or dystopian – tech-enabled totalitarianism and weaponized AI. At both extremes, and in between, however, a major question was one of trust. Do we trust AIs? For what purpose(s)? Do we trust them as much as humans? Perhaps more? Governing AIs will require us to answer these questions, but the answers are unlikely to be global. Governance more often happens in the narrow, constrained by jurisdiction, by sector, and by competing human needs and voices. The question of trust, then, will likely be answered through regulation in fits and starts: do we trust this AI for this purpose?

Who Wins and Who Loses?

One of the great promises of AI is that it can enable machines to produce better products and services than humans can. AIs can make dirty, dangerous work easier and safer, even saving lives. Even cognitive tasks could potentially be automated: from robots on the assembly line to AIs in the law office, perhaps? Knowing as intimately as we do the limitations of our finite human faculties, the question arises whether there is anything computers won’t, eventually, do better than we can. Discerning what’s probable – or even possible – with AIs from what is mere hype is one challenge of governing AIs.

A larger challenge is how to handle the displacement of humans from jobs when those jobs can be done better, faster, and cheaper by AIs. Will AIs save lives but destroy livelihoods? Industrialization has long used humans as machines. We might, at last, be reaching a new inflection point where machines are better at being machines than humans. If so, perhaps AIs can help lead us to a utopian future, wherein humans are free to pursue greater meaning, to profit from the “idleness” so praised by Bertrand Russell. Perhaps we can overcome the fetishization of jobs brought about by the fundamental reorganization of society wrought by the Industrial Revolution.


Questions of meaning are secondary to questions of survival. One cannot enrich one’s mind or soul when one cannot keep body and soul together. Working solely for meaning is the fate of only a privileged few. Most people work for an income. How do we provide people with an income in a world without (or with very few) jobs? Will the stigma of government-provided income disappear if it becomes more universal? Even if the not-so-small problems of displacement, poverty, and widening dual economies are solved, AIs will still pose thorny regulatory challenges that could directly harm some people for the benefit of others. Will the production enabled by AIs actually meet our needs better, or will it lead to yet more “more food, poorer health” scenarios? Is cooperation possible to avoid a global race to the bottom? Could we actually use AIs to make these trade-offs more explicit?

The Future: Like the Present, Only Longer

Despite the enormous fears and hopes around AI, the challenges of AI governance identified at this debate had little to do with either Skynet or the Jetsons’ Rosie the Robot. Instead, they were the mundane questions of governance that have bedeviled humans since time immemorial. They are questions of distribution and optimization, of how to regulate so as to encourage innovation while minimizing the harm to those whose skills innovation renders obsolete. Perhaps the future isn’t too different from the past. Perhaps the primary governance challenge concerning AI is the stuff of history books, rather than science fiction, as we task ourselves yet again to regulate the present to safeguard a future that we can only hazily imagine.

[1] Disputed; attributed to Niels Bohr in Arthur K. Ellis, Teaching and Learning Elementary Social Studies (1970), p. 431.