The emergence of artificial intelligence programs in everyday life has received a great deal of attention recently. Not surprisingly, this has brought about a mix of excitement and fear over AI’s ability to act as if it were human.
What we need to remember as we consider whether artificial intelligence has a place in our own organizations is that acting as if it were human is not the same as actually being human. The differences are stark and consequential.
Artificial intelligence has all sorts of practical applications, from voice recognition to self-driving cars to online shopping. Some AI programs can mimic human artistic talents, generating images that can sometimes pass for the real thing. Others, like ChatGPT and Bing’s AI chatbot, draw on huge databases of information gathered from across the internet to answer questions, provide search results, produce written reports and even hold human-like conversations.
The pitfalls of some of these programs have already been documented, from false information returned in response to inquiries, to students using AI to cheat on academic assignments, to disturbing interactions between person and computer. But the real risk runs deeper: the temptation to substitute machine output for human creativity.
As leaders, our job is to solve problems for our customers. Humans are good at this because of our ability to imagine new solutions to old scenarios and to deftly manage challenges we have never confronted before. We also understand nuances based on relationships developed with other humans; for example, we know without being told that messages may need to change based on stakeholder groups or individual personalities. We understand how to evoke emotions and change minds through words and art — something no machine can imagine.
Some have argued that using chatbots to churn out basic work products would leave humans with more time for strategy and other high-value work. I disagree. It would, I think, increase the amount of time spent on lower-value work: fact-checking, rewriting and otherwise vetting output before presenting “our work” to our clients. More important, it raises ethical questions about whether we are truly providing the original work and thinking our clients expect.
And that brings me to another concern: that relying on AI in place of human creativity will, rightly, dilute our overall value in the eyes of those who engage with us precisely because our solutions demand human sensibility. Vanderbilt University’s Peabody College learned this recently when it sent a chatbot-generated message in the aftermath of a deadly shooting at Michigan State University. It wasn’t the content that prompted anger but the source: using AI to produce what should have been a heartfelt statement on gun violence was seen as a failure of empathy.
Don’t get me wrong. AI does have the potential to drive technological advances, return answers quickly and synthesize large amounts of data into useful outputs. But until machines learn to live life and think on their own, they will always lack the human experience, creativity and inspiration needed to solve nuanced, real-world problems.