There is nothing that can be done to stop it; we just need to step aside and let it do its thing. Hell, it’ll probably do a better job than us anyway, right? This is the mood music whistling loudly from the AI steam train heading our way.
But wait a minute: humans don’t work for AI, it’s the other way around – and that’s a truth we can’t allow to get lost in the wind.
Saying this better than anyone right now are 50 experts from a dozen countries, working across a dozen disciplines, who have contributed their world-leading research to a new book that lays down exactly how we can make sure that our growing relationship with AI is always one that is ‘human-centred’.
“Human-centered technology is about aligning the entire technology ecosystem with the health and well-being of the human person,” explains Shannon Vallor, from the University of Edinburgh, one of the world’s most respected experts on human-centred AI. “The contrast is with technology that’s designed to replace humans, compete with humans, or devalue humans as opposed to technology that’s designed to support, empower, enrich, and strengthen humans.”
Vallor says the growth of some of this not-so-human-centred generative AI is being driven by organisations happy to see just how powerful the systems can get.
“What we get is something that we then have to cope with as opposed to something designed by us, for us, and to benefit us. It’s not the technology we needed. Instead of adapting technologies to our needs, we adapt ourselves to technology’s needs.”
One of the researchers, whose groundbreaking work weaves through the chapters of Human-Centered AI, is Malwina Anna Wójcik, from the University of Bologna. She flags the systemic biases, the ‘entrenchment of prevailing power narratives’, that are reinforced in technologies built with no input about, or from, marginalised groups. Wójcik says more diversity in research is crucial, as are global initiatives shaped by non-western perspectives.
The message for policymakers
AI might feel like it’s getting into everything at times – something the book explores in depth – but the authors show we have the human-centred antidotes to blunt the more troublesome aspects of this amazing technology.
Any sense among policymakers that they are helpless to control AI is misplaced and must be overcome quickly, says Benjamin Prud’homme, from the Quebec Artificial Intelligence Institute, who has a clear message for decision makers.
“Take the issue seriously. Do the best you can. Invite a wide range of perspectives—including marginalised communities and end users—to the table as you try to come up with the right governance mechanisms,” he adds. “But don’t let yourself be paralysed by a handful of voices pretending that governments can’t regulate AI without stifling innovation. The European Union could set an example in this respect, as the very ambitious AI Act, the first systemic law on AI, should be definitively approved in the next few months.”
How creatives can use AI
Claudia Rinke, currently a fellow at the Pratt Institute in New York, is leading a project exploring media co-creation with non-human systems, looking at how the creative world can work positively with AI, while also prioritising human wellbeing.
“For the human-machine relationship to truly flourish we need to design work systems that nurture the creativity of everyone involved. Wellbeing is paramount.”
For more, read Claudia’s latest article: How to optimise your creativity in the age of AI.