Six years ago, I wrote a blog post titled “Can AI-Powered Robots Have a Point of View?”. I recently came across a BBC mini-documentary following up on the AI improv team, and given the sheer amount of change in this space, I was drawn to revisit my old post.
So what did I do? The first thing lots of people do in 2025 - I plugged the post into ChatGPT with the following prompt:
Prompt: What is your point of view on this blog post? Given that it was six years ago, what has changed?
The response was long, but was nicely summed up by this quote at the end:
“Despite these advancements, AI still does not possess consciousness or subjective experiences. Its "point of view" remains a simulation based on data and programming, lacking the genuine beliefs and emotions inherent to human perspectives.”
Six years later, my prediction that conversations with AI would prove to be more “lifelike” really hasn’t come true. While I can have a “conversation” with Claude AI or ChatGPT, it still does not possess the inherently human trait of a “point of view.”
So why does this matter? As the BBC video points out, improv and generative AI operate under the same insight: the best way to “yes, and” someone’s idea or line of dialogue is to follow it up with the next, most obvious idea.
The challenge is that an AI model’s concept of what’s obvious is completely different from what you or I might think is obvious. Our own experiences, points of view, values, emotions, biases and health status at any given moment create a randomness in our ideas. That’s not true of a generative AI model.
Why does that matter for leaders? As we look toward managing teams that have semi- (or fully) autonomous AI agents interwoven amongst humans, we’re going to have to consider how we continue to honor and elevate the oddities and inefficiencies of human-generated ideas in comparison to AI-generated thoughts.
For example: an AI chatbot will be much better at explaining the specifics of a policy, processing customer data or retrieving FAQ answers. However, a human on the phone asking about the dog barking in the background of the customer’s home will probably generate a more pleasurable and memorable experience (and a higher NPS).
In that scenario, who is more valuable to the business? The efficient agent or the customer-loyalty-generating person?
And ultimately, who will affect the bottom line more effectively in the short, medium and long term?
These are questions that people managers, HR leaders and executives must be asking themselves now.
I’m curious to know your thoughts. Reply back with the kinds of conversations you’re having about hybrid AI/human teams at your organization; I’d love to share some in the next newsletter.