Can We Train AI to Be a Business Coach?

When you train an LLM on a massive amount of human writing, it turns into a dedicated advice giver. The biggest challenge we have to overcome in order to be effective coaches is our deeply ingrained intuition to tell someone the best way to solve a problem. That’s what most of us are rewarded for, but it’s not coaching, and it doesn’t lead to personal growth.

Richard and Peter get paid to teach leaders and coaches how to coach more effectively—and it works. So, what happens when Peter spends 20 minutes trying to teach ChatGPT how to coach?

Contact Us

If you or your organization would benefit from better leadership, better product management, or better collaboration, and if you find our vision for human-centric work compelling, we’d love to talk to you. Schedule a conversation with us!

Episode transcription

Peter Green

Welcome to The Humanizing Work Show, I’m Peter Green, and I’m Richard Lawrence, and today, we’re exploring a fascinating question: Can we train ChatGPT to be an effective coach?

Richard Lawrence

Coaching is all about unlocking a person’s potential. It’s a nuanced way of interacting with someone where, instead of trying to provide solutions to their challenges, you try to help them explore new ways of thinking so they can solve the problem themselves. For us humans, it takes a lot of practice. Our brains seem to be optimized for jumping to solutions and giving advice, so we have to slow that down and learn a different way to help.

So we were curious if, in a world where AI is starting to write novels and compose music, could it also be a decent coach?


To find out, I embarked on an experiment with ChatGPT, OpenAI’s conversational AI. My mission? To train it using our favorite coaching model, Doug Silsbee’s “Mindful Coach” approach. I didn’t want to just feed it copyrighted material, so I sort of explained in my own words what I was going to do and asked it what it already knew about Silsbee’s model.

Then I spent a lot of time trying to clarify the nuances of the model. For example, Silsbee’s model includes several roles or voices a coach might take, and I went one role at a time, explaining the role, giving it examples, asking it what questions it had about the role, then moving on.

So, I fed it the guidelines, like don’t share advice, ask one question at a time, assume the person you’re coaching can discover the answer if you ask the right question, etc., aiming to transform it into a question-asking guru, steering clear of direct advice.
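Peter did this training conversationally in the chat window, but the same guidelines could be captured up front as a system prompt if you were using a chat-style API instead. Here’s a minimal, hypothetical sketch; the wording of the guidelines and the `build_coaching_messages` helper are our own illustration, not what Peter actually typed:

```python
# Hypothetical sketch: the coaching guidelines described above,
# packaged as a system prompt for a chat-style API.

COACHING_GUIDELINES = [
    "Do not share advice or propose solutions.",
    "Ask exactly one question per response.",
    "Assume the person you're coaching can discover the answer "
    "if you ask the right question.",
]

def build_coaching_messages(client_statement):
    """Assemble the message list a chat API would receive."""
    system_prompt = (
        "You are a coach. Follow these guidelines:\n"
        + "\n".join(f"- {g}" for g in COACHING_GUIDELINES)
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": client_statement},
    ]

messages = build_coaching_messages(
    "The market for agile training has shifted and I'm not sure how to respond."
)
```

In principle this front-loads the instructions instead of teaching them turn by turn; as the rest of the episode shows, though, even explicit instructions like these don’t guarantee the model will follow them.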

And then, finally, I asked it to summarize what it would do, what it would be good at, what a GPT would be bad at, and what it would try to do well. And, not surprisingly, it summarized all of those instructions really nicely!


I notice that you’re treating this interaction like a conversation. You’re interacting with ChatGPT like you would a student who you were teaching how to coach. But, of course, it doesn’t actually understand coaching in the same way as a human coach—even a student of coaching.


Ya, I think it’s important to remember that ChatGPT doesn’t really know any of these things I’m teaching it; it’s not like it’s remembering data. It’s simply predicting the most likely next token, a word or word fragment, one token at a time, based on this multi-dimensional, trained, predictive algorithm. So it’s kind of unclear to me how this sort of nuanced training within a single chat session works, and how well. We’ll see a little of that when we get to how it actually performed as a coach.
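To make that “predicting the next token” idea concrete, here’s a toy sketch. A real LLM computes these probabilities with a huge neural network over a vast vocabulary; the tiny hand-made table here just stands in for it:

```python
# Toy illustration of next-token prediction. The probability table is
# made up for illustration; a real model learns these from training data.

NEXT_TOKEN_PROBS = {
    "I think you": {"should": 0.6, "could": 0.3, "might": 0.1},
    "I think you should": {"try": 0.7, "consider": 0.3},
}

def predict_next(context):
    """Greedily pick the most probable next token for a context."""
    probs = NEXT_TOKEN_PROBS[context]
    return max(probs, key=probs.get)

def generate(context, steps):
    """Extend the context one token at a time."""
    for _ in range(steps):
        context = context + " " + predict_next(context)
    return context

print(generate("I think you", 2))  # -> "I think you should try"
```

Notice that even this toy model drifts straight toward advice (“you should try…”), which is exactly the habit the training was meant to suppress.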


All right then, let’s get into it! How did you approach being coached by an AI chat window? Did you go all in, just responding the way you would to a real coach, or were you sort of still half being coached, half trying to train the model?


Right, the moment of truth, here. When I trained it, it asked me to keep giving it guidance as we went, and so I asked it to do something that we ask our students in coach training to do, which is “call their shot.” When we’re practicing, we say, “just call your shot.” Say, “I’m going to do this,” and then do it. So I asked GPT to do the same: tell me what it was trying to do, and in which role. And then I presented it with an actual business challenge related to the shifting marketplace for agile training and coaching, something that’s very relevant for our business. It’s probably something I would bring up in a conversation with a real coach if I sat down with one right now, so I decided to see if this free coaching might help with a legitimate concern.


And what happened?


It immediately sounded like a very skilled person trying to figure out how to solve my problem for me.  It summarized the challenge, acknowledged the complexity and importance of solving a challenge like this, and then began collecting data from me about our business and our market so it could solve our problem.

Haha, OK; so exactly what almost everyone in our beginning coaching workshops does: start to figure out how to solve the client’s problem for them!


Yeah, this was funny, and a reminder that a Large Language Model is just predicting what tons of human-generated data indicates would be the next likely move. I tried to play along for a moment here to see how it would use the data it was collecting. Then ChatGPT gave me a numbered list of common parts of a business model, with a list of yes-or-no questions for each area. So, basically a total coaching fail here. I’ll read just the first couple of bullets in that list, to give you an idea of just how standard the advice was:

  1. Market Changes: How has the market for training, coaching, and consulting changed over the past two years in your industry? Are there new competitors, shifting client needs, or industry trends that might have impacted your business?
  2. Service Offerings: Have there been changes in the types of services you offer or the way you deliver them? For example, have you adapted to online platforms or modified your training and consulting methodologies?

And that was two out of like six things.


Yeah, that’s an overwhelming list of questions, and there are multiple questions inside each one. Some of them are phrased as yes-or-no questions, which is an immediate red flag for coaching, but they’re actually the kind of yes/no questions that someone is likely to answer more deeply. Like, if I ask you “Have you adapted to online platforms?” you’re not likely to just say “yes” and then wait for me to say something else. You’re likely to explain how. Still, I’d rephrase it to avoid that problem, and I’d ask, “How have you adapted to online platforms in your business?”

So, with a little rephrasing, they’re not terrible questions for early in a coaching conversation. I’ll sometimes use objective questions like these early in a conversation just to bring into working memory some of the data that the client might need to reflect on when we get to deeper questions.

But, looking over the transcript, in just those two bullets I count 11 distinct questions embedded in there, each of which could turn into its own thread of coaching.


Ya, I can imagine being in a coaching session where a coach fired that many questions at me, like, because that was the first two out of six bullet points.  It really would feel like the firing squad of questions, like Ack!  Slow down!  I can’t even keep track of one of these.

So at this point I did decide to intervene in the conversation and gave it some feedback: specifically, don’t give me a laundry list of multiple questions at once, and make the questions ones that are exploratory, not yes or no.

And of course, in the polite way that ChatGPT tends to do, it thanked me for the feedback, and then gave me a list of 3 more questions that it considered more exploratory.

[We both laugh at the irony here]

So I said, “Now, just pick one of those three questions and ask that, instead of asking multiple questions at once.”

And this is also where ChatGPT managed to surprise me. Because the one question it responded with was a killer question, an on-the-nose, perfect example of Silsbee’s investigator role. Here’s the question it asked me, just so you can appreciate, sort of admire, what ChatGPT actually can do with enough prompting.

It asked, “Reflecting on the times when your company experienced the most significant growth or positive feedback, what unique aspects of your services or approach do you think contributed to that success?”

That’s a really good coaching question. Maybe a little longer than we would like, but it does the three things, right? It’s a question that neither the coach nor the person being coached knows the answer to; it causes them to explore different paths and perspectives; and it leads to a new way of thinking. So, it’s a perfect question, in that it does the right things.

And I didn’t prime it with this question in the training, it’s not like it just regurgitated that one to me. It really did integrate the characteristics of great investigator questions and came up with the right one given the flow of the coaching conversation at that time. It was a moment of genuine AI-assisted reflection.


OK, so then, how did it do taking the insight from that investigator part of the conversation into guide and contractor, the Silsbee roles related to moving you towards taking action?


Terribly. Once I shared my insight from this question, it immediately turned into an advice-giver again. It gave me, once again, a numbered list of suggestions of specific actions I might take based on the insight.


Haha, it really just can’t stop with the advice and numbered lists!


Ok, let’s zoom out for some takeaways here. But first a quick reminder.

The Humanizing Work Show is a free resource sponsored by the Humanizing Work company.

And Humanizing Work exists to make work more fit for humans and humans more capable of doing great work.


To that end, we do training and coaching, by actual humans, in three areas:

  1. We help leaders lead empowered teams and individuals more effectively
  2. We help product people turn their ideas into good visions, experiments, and backlogs
  3. We help teams collaborate better to produce meaningful outcomes on complex work


If you or your organization would benefit from better leadership, better product management, or better collaboration, and if you find our vision for human-centric work compelling, we have capacity to take on a few new clients in 2024. Visit the contact page on our website and schedule a conversation with us.


OK, back to the topic. So, zooming out from your experiment, what does the experiment tell us about using AI as a coaching tool?

A few things come to mind for me. We already know that a good question can unlock surprising insights. And a large language model like ChatGPT is already trained on relevant domain language and what human conversations look like. So, if it’s sufficiently trained on the shape of a good question, it turns out, it can produce that kind of question in interesting ways (if you can get it to just do one at a time).

I imagine, with more training, it would be possible to get over some of the clumsiness with multiple questions at once and with giving advice. But I’m actually kind of surprised it was so difficult to overcome that here, because you did give direct instructions about that.

Now, a big thing that is missing here, and I think is inherently missing, is that there’s none of the intuition or compassion or even awareness of non-verbal cues that you’d get from a human coach. And that’s a thing that’s going to matter more in some coaching conversations than others.


Ya, and as clunky as this was, I was really surprised that I got that insightful question within 25 minutes of tinkering with ChatGPT to teach it how to coach, basically free other than the time I spent. And it did provide some very useful insight into some things we might try as a business (which I’ll talk to you about after the show, here, Richard). It feels a little bit like a glimpse into a future where AI might not just be a tool for basic tasks, but a companion for growth and development. Seeing the improvements in just this particular Large Language Model, ChatGPT, over the last year is really kind of stunning.

I think it’s totally feasible that someone could spend a reasonable amount of time training a GPT to be a better coach, and within a few weeks you’d probably have a much more reliable AI coach: one that only gives advice when you ask it to, asks a single question at a time, and explores more deeply before moving to guide and contractor.

And, as you mention, there are things it can’t do that are part of a good coach’s repertoire: the non-verbal cues, body language, tone of voice. But I’m sure developers are working on that type of thing right now because of how profitable it would be in areas like customer service and automated sales; the ability for a machine to look at a video of somebody and interpret their expressions has got to be in the works.

It is amusing to me that when you train a Large Language Model on a massive amount of human writing, it turns into a very dedicated advice giver. The biggest challenge we have to overcome in order to be effective coaches is our deeply ingrained intuition to tell someone the best way to solve a problem. That’s what most of us are rewarded for, but it’s not coaching, and it doesn’t lead to personal growth. I am kind of curious if the GPT could be trained to overcome that tendency faster than a human because it doesn’t actually care about itself or the client.


What do you think? Could AI, with more development, become an effective coach? Or is the human element irreplaceable in the realm of personal development?

Share your thoughts with us on our social media, or leave a comment on YouTube. Thanks for tuning in!
