It's reckless to forget people when we talk about AI

How to Put Humans at the Heart of Tech

By Mike Dargan, UBS Global Head of Technology

There hasn’t been a single day in the last few years when technologists have not grappled with the complexities and far-reaching implications of the coming era of artificial intelligence. Today, there should be no more speculation about whether artificial intelligence is coming. It’s here. The big question now, for business and for society, is: are we spending enough time thinking about people? If the answer is ‘no’, then we are being reckless.

For those of us responsible for introducing new AI systems—developing the complex, automated processes that create more efficiency and free people from some of the mundane aspects of day-to-day work—there is an equally important responsibility: making sure the people developing our systems are thinking critically about how their work will affect millions of people. Technology leaders carry a critical dual responsibility to understand and see the full picture when introducing artificial intelligence.

So when we talk about how artificial intelligence will impact society, what can we actually do beyond paying lip service? Here are three questions we should be asking:

How do we understand and practically see AI?

Part of me wonders whether we are collectively spending enough time truly understanding how people visualise an AI-filled future. It is naïve to assume that everyone understands how AI will impact them, if at all. In the same way that we engage in political discourse and encourage questions about society, we should be encouraging people to ask open and genuine questions about the future of AI. Understanding how people feel about developments in the technology is as important as creating and monetising it. Is there a chance that we are allowing our perception of what AI is to shape how we implement it? Institutions of learning, including businesses, have a duty to close the gap between reality and science fiction by providing a base level of knowledge about this ubiquitous field. Introducing AI in the absence of serious, thoughtful efforts to inform people how their lives are going to be impacted is irresponsible and short-sighted.

How are we preparing people for the future?

There’s no question that we need to plan, with urgency, how we educate ourselves, our employees and our children in the skills needed for a world where we live and work alongside AI. But do we really know what that means? Those who are developing AI-based applications need to spend a portion of their time learning more about human behaviour, studying philosophy and ethics, to ensure natural biases do not find their way into these systems’ decision-making. A recent report from AI Now highlights the danger of unchecked systems from third-party vendors in core public agencies, such as criminal justice, healthcare, welfare and education. We must be alert to the fact that humans create AI, and that it therefore has the potential, in the wrong hands, to reflect, amplify and accelerate existing cultural assumptions and inequalities.

How will we measure and value our time?

In an organisation fuelled by automated tasks and processes, many of us should, in theory, have more time. But how will that time be filled? How do we assess how people are creating value? How can we measure cognitive output and contributions? One of the pressing tasks for managers in the next few years will be to create new metrics for assessing performance and new models for capturing ground-up contributions from a wider range of people. Harnessing the value of a firm’s cognitive surplus is the new challenge we face.

Artificial intelligence is here today, and it is only going to become more powerful and more pervasive. For business to succeed and for society to truly benefit from its real value, machines must unlock human potential rather than suffocate it. AI will provide the answers, but we need to make sure we’re asking the right questions.