How to Tackle Your Business’s Biggest Risk with David J Bland

Assumptions mapping is a key input for reducing risk. We don’t want people to experiment just for fun or solely based on their personal interests. Instead, to effectively reduce risk, we focus on identifying the most critical assumptions. We prioritize these assumptions based on their importance and the evidence supporting them. We ask, ‘What must be true for this to work, and where do we lack evidence?’

In this episode, David J Bland, author of “Testing Business Ideas” joins Peter and Richard to talk about how to use Lean Startup-style experiments in any kind of business.


Episode transcription

Peter Green

For many years as a program manager at Adobe, I was responsible for listing and reporting on our team’s biggest risks. And most of the things on our list in those days were what we would call delivery risks, like would certain parts of the project come together on time, or would this third-party contract get signed?

Then we would release the product and it would encounter the actual biggest risk, which is would customers pay for it, use it, like it, keep paying for it in the future? And those risks weren’t on our list, but they should have been. We just didn’t have great ways to mitigate those risks in those days.

Richard Lawrence

Now with agile techniques and ideas from the Lean Startup community, we have better tools to mitigate that kind of customer or market risk by actually testing our ideas early and inexpensively, not just building a thing and hoping customers like it. Our guest today, David Bland, is at the forefront of helping organizations of all sizes test their business ideas.


Before we get into our conversation with David: A quick reminder that this show is a free resource sponsored by the Humanizing Work Company. Humanizing Work exists to make work more fit for humans, and humans more capable of doing great work. To that end, we do training and coaching in three areas:

  • First, we help leaders lead empowered teams and individuals more effectively.
  • Second, we help product people turn their ideas into good visions, experiments, and backlogs.
  • And third, we help teams collaborate better to produce meaningful outcomes on complex work.


So, if you or your organization would benefit from better leadership, better product management, or better collaboration, and if you find our vision for human-centric work compelling, visit the contact page on our website and schedule a conversation with us.


I first met David when he was doing some consulting work at Adobe sometime around 2014, and I immediately recognized a kindred spirit. David’s got a rich and interesting background, including hands-on work as a designer, a front-end developer, a development manager, and a technical project manager, and he was an early proponent of blending agile with Lean Startup techniques to do discovery better.

He also has the best LinkedIn account in the business, bar none–something we may talk a bit about today.


David J. Bland. Welcome to the Humanizing Work Show.


Thanks for having me. Nice to see you all again.


David, we buy your book, Testing Business Ideas, for participants in our Advanced Product Owner program as a textbook for the week we spend on product assumption testing, and we often joke that we’re glad you wrote it, because the world needed it and we didn’t want to write it.

We were actually in the process of putting together a catalog of tests when we heard that you were going to work on this book, and that did the job for us. How did you come to write that book?


It was through my coaching–years and years and years of coaching. I saw all these different experiments and coached teams on them. And then Alex Osterwalder, my partner and coauthor, came to me and said, “Hey, you should write a book on this.” It didn’t really take a lot of convincing to get me to do it, but it took about a year to write, because even though I probably had ten years of doing this work under my belt, getting it organized that way was quite a challenge.


What’s something that you learned as you were writing the book?


I learned something about how I describe things: sometimes I try to be too cute and have a bunch of inside jokes that no one else gets– so, it took me some time to scrape all that away. And I think one of the biggest things I learned was how to organize the work. At first, I thought, “Hey, I’ll just organize these by ‘desirable, viable and feasible.’”

I’ve been using those themes forever. It’s nice and clean. And then, sure enough, when I got into writing the book, I realized, “Oh, wait, some of these can be used in different contexts, and you can test desirability and viability with the same experiment, depending on what you’re trying to do.” And so, I sort of had this existential crisis at one point with Alex. We were in Switzerland–I flew over to help write a bit over there–and I was like, “This isn’t working– how are we going to organize all these?”

And we finally landed, really kind of taking inspiration from Steve Blank’s work, on discovery and validation. And so, once we organized it that way, it flowed again. But some of these things I thought were straightforward just were not, when I got into the writing process.


Maybe for our listeners who may not be familiar with the distinction between those three different kinds of things you might test, can you run through those and give an example of a test being used for a couple of them?


Yeah. I first learned of this framing probably from Larry Keeley’s work at the Doblin Group, which also informed the Stanford d.school and IDEO, and it’s really hard to find the single source of truth for that framing. Some of my peers, like Josh Seiden and Jeff Gothelf– I was asking them, when they were writing their books, how far back were you able to trace it?

And none of us really got further back than Larry Keeley. But it’s this idea of user-centered design, right? And looking at risk that way. So, for example, if you think of desirability, it’s a lot around the value proposition, the customer, the jobs to be done, their pains and gains. When you look at viability, it’s a lot about pricing and costs, willingness to pay–is this financially viable?

And then backstage, feasibility? Yes, it’s technical feasibility, but it’s also, as an org, are you able to execute this the way you’re organized, and can you prioritize it? And quite often I’m dealing with regulatory risks and such, so those kind of fit in with the feasibility theme. So if you ask three questions: desirability is kind of “do they?”, viability is more “should we?”, and feasibility is more about “can we?”


Do you have some favorite test examples that you’ve come across? When we introduce the book–there are so many tests in that large catalog–we always point out a few. Like, this one’s really fun, or this one’s really clever, or this one does a really nice job of slicing across things. Do you have some favorites?


One of my favorites is concierge. It’s basically manually doing something at a small scale. And when you think about that, you can tap into the team’s creativity. We actually ran some of those at Adobe, even later on with their AI stuff–we would concierge things at first. And when you think about multiple themes, you can almost test all three, in a way, with that test. You can test desirability because you’re manually delivering something.

You can charge for it–people will pay for concierge services as long as they get the value they need. And then feasibility: can you do it manually, or with the limited technology available, can you deliver that value? So it’s a great example of one of those things that taps into teams’ creativity, forces them to work at a small scale, and also tests, you know, potentially all three themes of risk.


It’s usually the first one we mention too. And there are some great industry examples of that, like the Wizard of Oz version of it. There are some fun ones of those, where the user or the buyer doesn’t know that there’s a concierge service happening on the backend, right?


I would say a lot of AI falls into that realm at the moment, although less so now that it’s more mature. But anything where it seems like, oh, magic just happens, and then here’s the value that was promised to me. You know, sometimes people are inching their way closer to the real world by Wizard of Oz-ing it. So sometimes there’s, like, a digital curtain involved, or something that prevents people from seeing behind the scenes.

But what we’ve found with customers is that normally they just don’t care, as long as they get the value they were promised. They really don’t care about all the things you did behind the scenes to produce it.



I think the obvious online course for you to create would be a course on experiments, basically mirroring the book. But you recently released a course just on the assumptions end of experimentation. What caused you to focus there?


Yeah. It wouldn’t be very lean of me to do an entire book as a course, so I’m trying to break it down into smaller chunks. Assumptions mapping is a really key input into the exercise, because you have to be able to prioritize your risk in regard to evidence and importance. And so we don’t want people just experimenting on things that are fun or are just things they want to do.

We want to basically try to reduce risk. And in doing that, you’re looking at “What are the most important assumptions we’re making that would have to be true for this thing to work? And then where do we lack evidence to support those?”

And so, I really love that two-by-two. I picked it up, I would say, first off, probably back with Jeff and Josh, you know, from Lean UX, and then customized it to the point where it’s almost unrecognizable now–folding in all the different themes, changing the axes, and getting it to a point where it just worked for the teams I was coaching. So it took several years of iteration to get to the version that made it into the Testing Business Ideas book.


Yeah, walk us through the two-by-two there a little bit. What would be an example of something that might be top right, bottom left? Yeah, walk us through the matrix.


Yeah. Basically, I want a cross-functional team that represents all three themes of risk, if possible. So, for desirability risk: who in your org can represent that type of risk around the customer, around your value prop, the jobs, pains and gains, and all that? For viability, somebody who can talk numbers and market sizing–people who understand the cost and revenue side of things.

And then for feasibility, it’s a lot of technical representation, but it could also be legal, safety, compliance–folks who can speak to regulatory concerns. And so once you have all that mapped out–the risks of each theme–then we go through a prioritization, and that is, “Okay, what is the most important out of these–so important to the success of the product, service, or business idea?” And then, how much observable evidence do we have to support it? So, recent observable evidence–quantitative or qualitative–that supports a statement. I like writing these as “We believe” or “I believe” statements, so that they’re explicit statements, and you can ask, okay, how much evidence do we have supporting that statement?

And with that, we basically go through that two-by-two and we realize: oh, top right, these are what we call riskiest assumptions, or leap-of-faith assumptions. These are things that, if proven wrong, could cause the whole thing to fail, and we don’t have a lot of evidence to support them. Let’s focus our near-term testing there. And that’s where the book picks up, with all those experiments organized by theme and by the other taxonomy we have there, which can help match to the kind of risk in that top right.

Anything below that line, I tend to just ignore–not overall, but from a testing point of view. I don’t want teams to focus on stuff that’s relatively unimportant. And then, for the top left, where it’s important but we have a bunch of evidence, there are just diminishing returns to nudging that stuff closer and closer to the left.

So, it’s something we usually share out. It’s in our roadmap. It’s in our planning. It’s stuff that we have to do to make sure everything’s successful.

But I wouldn’t focus a lot of my near-term experimentation there, because I don’t want people like snow plowing risk all the way to the end or, you know, ignoring the things that, if proven wrong, would cause it to fail and just, you know, play around the edges of that.

I want to make it front and center: “Let’s go focus on that risk and pay it down.”
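The two-by-two prioritization David walks through here can be sketched as a tiny quadrant classifier. This is a hypothetical illustration, not something from the episode: the 1–10 scoring scale, the threshold, and the example statements are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str   # a "We believe ..." statement
    importance: int  # 1-10: how critical this is to the idea's success
    evidence: int    # 1-10: how much recent, observable evidence supports it

def quadrant(a: Assumption, threshold: int = 5) -> str:
    high_importance = a.importance > threshold
    low_evidence = a.evidence <= threshold
    if high_importance and low_evidence:
        return "riskiest: test next"      # top right: leap-of-faith assumptions
    if high_importance:
        return "known: track in roadmap"  # top left: important, well-evidenced
    return "ignore for now"               # below the line: skip near-term testing

assumptions = [
    Assumption("We believe customers will pay $20/month", importance=9, evidence=2),
    Assumption("We believe we can build the mobile app", importance=8, evidence=8),
    Assumption("We believe users prefer a dark theme", importance=3, evidence=1),
]

# Most important, least evidenced assumptions first
for a in sorted(assumptions, key=lambda x: (-x.importance, x.evidence)):
    print(f"{quadrant(a):<26} {a.statement}")
```

In practice the placement comes from the team conversation, not from scores; the point of the sketch is only that “test next” falls out of importance and evidence together, not either one alone.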


How do you help people deal with the psychological barriers to articulating and prioritizing assumptions? Like I want to be right, or it feels too risky to even talk about these things and risk my career on them versus pretending I’m right.


It’s tough. It’s a lot of facilitation. You know, it’s one of the reasons I’ve changed the labels over the years. First it was “known and unknown,” and people would just say, “I just know,” and put the sticky there. Then we’d say “certain and uncertain,” and they’re like, “I’m just certain.”

We finally got to a point where it’s evidence-based now. That takes a lot of the emotion out, where I say, “Oh, do we have any evidence to support this?”

And when I start talking about evidence, I notice people are less defensive versus what they know or what they’re certain about. And I say, “Look, do we have any evidence that supports what we have here?”  You know?

And so I really do take a lot of the emotion out of the situation as much as I can with my facilitation and try to make sure they’re prepped for it. I, you know, I have a bunch of different steps I take to get into the exercise. You know, I’m not just springing it on them. And when I’m facilitating, it’s very much a, you know, let’s get everybody involved if possible.

We don’t want one person just ranking everything while everyone else nods their head. We want that one person to raise their hand and go, “Oh, wait–you know what? We actually do have evidence on that, because I did something last week.” So, in terms of agile, you can almost liken it to story sizing or something, where you’re just trying to drive out uncertainty about the work.

So many times in my career, I’ve heard people say, “Oh, well, this is a whatever,” and then someone else raises their hand and goes, “Well, you know, let’s– I have this other information we can inject into the conversation.” And so, it’s more about that, like, structured conversation than the exact placement on the map. I want people to talk about risk openly.



Anytime I’m thinking about using, or actually using, something that’s trying to be fairly analytical, I’m always concerned that the role of intuition will get short shrift, because I do think there’s a role for intuition to play in making good business decisions–and really just good decisions in life. How do you think about the balance between the two?


Yeah. I think it’s important that intuition plays a role. I think it’s important to have a vision, right? It’s just more about testing against reality. So it’s more about, do we have any observable evidence that we’re on the right track? We think it’s the right track; we feel it’s the right track. Can we go find, like, our next best test to get more evidence that we’re on the right track?

So, I try to use language that doesn’t make people feel offended. And I try not to be dismissive–I work in a bunch of different industries, and I’ve had deep experience in some of the industries I coach and advise in, but I really try to play the role of, “Well, I’m not going to tell you your idea’s bad, but let’s talk about what the risk is around each of those themes and get a calibration there on our confidence,” because we don’t want to be overconfident and place a really big bet on things where there’s only light, anecdotal evidence in place. So I try to keep talking in language of, hey, let’s just get this out of your head. Let’s prioritize it. You know, if we had one customer say yes, just be mindful that it doesn’t mean they’re going to pay for it–what they say and what they do can be different.

So how do we do another test to close that gap? I just try to be really mindful of my facilitation language and not discredit people or make them feel defensive. It’s more that I’m just coaching them through a process to get to, like, what’s the smallest thing we can do to find better evidence?


Yeah. I love that idea of the difference between what customers will say and what they’ll actually do. It reminds me of a test: there was a VP at Adobe–actually, I don’t know if you ever ran across Mark or not–but Mark had a product idea for something in the photography space. So he put an ad on Craigslist or something and said, “If you’re a photographer, meet me at this Starbucks and I’ll buy you a coffee.” And then he filtered people by asking them “Canon or Nikon?” when they approached him, because they may not choose one of those brands, but they’ll have an opinion about it if they’re actually a photographer–if they’re in his target. And then he would give them the pitch and run the test. At the end of it, he would say, “Would you buy this product if I built it?”

And if they said “Yes,” he said, “Great.” And he would pull out a Square reader–the old plug-in Square reader for the iPhone–and say, “Great, can I get your credit card?” He said that out of the forty people who said they would buy it, like, four of them would give him the credit card.

So, the difference between behavior and report, pretty critical.


Yeah–think about it with currency, in a way. So, think about currency from your customer’s point of view and their level of investment. Saying yes to an interview? Pretty low, right? Giving up your email? Still pretty low as far as currency goes. But you can think of it as a ladder: as you go up, eventually you want to get to something where they’re providing payment information and they’re paying for something.

And you find out how real it is based on that–the closer you get to the real-world experience. And so I’m just mindful of that gap. I think it’s easy for us to get really excited about hearing, verbally, things that confirm our beliefs, and we want to jump to build. It’s about being able to not get too excited by it and say, okay, how are they going to behave?

Are there steps in between where we can get them closer to paying? You know, sometimes it’s a letter of intent; sometimes it’s just putting it in writing. There are other things we do with my clients where we just try to get closer and closer and realize, okay, we can give ourselves permission to invest more because we have stronger evidence that we’re on the right track.
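David’s “currency ladder” of evidence strength can be made concrete with a small sketch. The rung names and weights below are illustrative assumptions based on the examples he gives in this conversation, not an official scale:

```python
# Rough "evidence currency" ladder: higher rungs mean stronger commitment
# from the customer, and so stronger evidence. Rungs and weights are
# illustrative, ordered from cheap talk toward real-world behavior.
EVIDENCE_LADDER = [
    ("said yes in an interview", 1),
    ("gave up an email address", 2),
    ("signed a letter of intent", 3),
    ("provided payment information", 4),
    ("paid for the product", 5),
]

def evidence_strength(signal: str) -> int:
    """Return the rung weight for a signal, or 0 if it's not on the ladder."""
    for name, weight in EVIDENCE_LADDER:
        if name == signal:
            return weight
    return 0

# Verbal interest is the weakest rung; actual payment is the strongest.
assert evidence_strength("paid for the product") > evidence_strength("said yes in an interview")
```

The design choice mirrors the Adobe Square-reader story above: forty verbal yeses sit on the bottom rung, while four credit cards sit near the top.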


When it comes to applying Lean Startup ideas in enterprise settings, not startups, we’ve noticed there’s a buzzword-compliance version, and there’s, like, an 80/20 version–what you can apply in a corporate setting when you can’t do the whole thing, for whatever reason. What have you seen people do effectively in larger orgs to get actual benefits, and not just buzzword compliance, on this stuff?


Yeah. I mean, speaking of Adobe, a lot of the stuff I did coaching them was, like, labs-branded apps and things that we would go off and test without leading with the Adobe brand right away. You know, there’s an amount of attention you get when you brand something that’s really early stage. People flock to it. They start writing articles on it and everything, and dissecting it.

And we thought, “Wow, can our ideas just stand on their own two feet without the brand supporting them?” And if you notice, a lot of the stuff that eventually fed into Adobe Express, right–there were just a lot of off-brand tests that you either spin down when they don’t work, or take on-brand when they do.

And so the labs branding or project branding, it’s not necessarily, you know, commonplace yet, but I see more companies picking it up. If you go and just browse Indiegogo, you’re going to see a lot of corporate brands on Indiegogo testing new ideas, sometimes hardware ideas.

So you have, like, Delta doing their first wave of, you know, Indiegogo campaigns.

You had GE Appliances doing FirstBuild–although it’s kind of interesting, that “first” in the name; I’m not sure if that’s a coincidence. And then you have other labs brands that are just, you know, “Look, we don’t necessarily need the money to fund this idea, but we want to know there’s viability here.” And so I’m noticing brand being the number one thing that comes back time and time again: we can’t damage the brand.

And so it’s about finding creative ways that you can test without necessarily damaging your brand. With startups, you really don’t have a brand yet, or you probably don’t have a brand anyone cares about yet. Bigger companies–I mean, some of the companies I work with, the brand’s over 100 years old–you have to be really careful, because the instinct is to polish internally until it’s better than perfect and it matches our brand, and then launch to see what happens; versus, can we use some kind of labs brand or off-brand to just go out and find some real-world evidence? I think that’s probably the biggest hurdle I see inside big companies.


For somebody who’s in an internal product owner kind of role–they don’t have the authority to spin up an alternative brand or something–what’s an example of something you’ve seen a person in that position do that still gets the spirit of experimentation into the work?


Yeah. When you think about internal POs and being able to solve for something inside the company, I’ve noticed this kind of weird break with some of my clients, where they’re very, very good at experimenting on, or with, customers, but they almost don’t take that internally and use it on anything internal. So they treat everything internal almost like an order, or a task, or a requirement to execute on.

But externally they have this amazing experimentation engine. And so what I’ve been trying to do is change that culture a bit by saying, “Okay, take that internal request–let’s say you had somebody from a group say, ‘I need a dashboard to do X, Y, and Z.’” Well, the inclination there is usually, “Okay, let’s start building out what the dashboard could be.”

And let’s break that down into sprints and start building it–versus, “Can I do a little bit of discovery on what they would use that dashboard for? And is a dashboard even the right way to solve the problem they’re trying to solve?”

And so I think it’s about taking even just smaller bits of this approach and using it internally, saying, “Okay, what are, like, the riskiest assumptions about building a dashboard internally?”–because I think we’ve all probably worked on dashboards that were elegant and never quite used to their potential.

So that’s why I’m using that as an example. Can I do maybe, like, a quick assumptions map on that and say, “Okay, how would we test that internally, in a really small way, to see whether or not the dashboard is the right approach?” And so I find myself, time and time again, trying to help folks who feel as if, “Oh, this is great, but I can’t do it internally,” by giving them really small techniques and small ways to test internally, versus taking things as requirements you just execute on.

Because we’ve all built stuff internally that was a flop. And I think we could have avoided that by just doing a little bit of internal testing. So I’m not saying they’re your customers, but they’re folks that have like these preconceived ideas of how something’s going to solve for a problem and maybe being able to unpack that into assumptions and little, little tests internally can also benefit, instead of just kind of building that out as you would as a requirement.


Yeah. We actually have an episode of the show called “They’re All Customers,” where we invite people to try out things like a customer profile for an internal stakeholder who feels change-resistant. Take the thing you’re trying to get them to do: if that were a product, and you were a product manager doing discovery, and that person was your customer–what are the jobs to be done? What are the pains? What are the gains? Does your thing actually do that? Can you describe it in terms of how it does the job for them? Right? And so I would go so far as to say they are customers. The currency may be permission, but–I like the idea of treating them like customers.


As long as you still have a business, right?


I think–and there’s also–so if I work with, let’s say, financial institutions in New England or something, I’ve noticed that you might internally have a small set of customers that are really big revenue drivers, right? So let’s say you have just ten of those, total, in your entire company.

Then even they fall into this trap where they think they know what the solution should be, and they just tell you to go build it and then it doesn’t quite work. So even treating them with, “Okay, can you explain a bit more about what you’re trying to do and what are the pains you’re experiencing and the gains?”

I love that framework. So I think, even asymmetrically, sometimes you might build an internal tool that impacts revenue more than anything you would do externally. So, testing inside–I do like using these techniques internally. I think it’s just practicing those, and feeling as if you have permission to do that, that’s really important.


As I alluded to in the intro, you’ve sort of got the best meme-fu in the business world on your LinkedIn account, and I have a two-part question related to this. First: do you have a process you’ve put together for tracking down the good memes? Because I notice they’re not always ones I’ve seen before, but then once I see your version, I realize it was out there–maybe you’re just more tied in. So that’s the first part: what’s your process? And two, I’m curious whether that approach evolved organically, or whether you had a strategy, like, “LinkedIn is dry and boring; I’m going to stand out.”


You know, I think I have a sense of humor–I’m not sure. I was always sort of voted class clown early on in my career. And I have a very specific sense of humor, though. And so what I’m trying to do is educate people, but make them laugh. Like, not an “oh, that’s mean-spirited” laugh–more like, “Oh my gosh, I’ve seen that before.”

It, like, feels like where I work. And so about the time the pandemic hit, you know, I was just starting my book tour, which was abruptly cut short, and I was thinking, everything’s so serious–can I just lighten the mood a little bit on LinkedIn? I was doing it on other networks, but I really doubled down on LinkedIn because I felt like this is a place where I can really help people who are looking for specific advice. And so it just started off with me doing a couple of tests, you know–I tried text over image, I tried video, I tried linking to video, I tried all these different things. And then I landed on, “Oh, wait, what if I just have one line that ties into product experimentation, or experimentation in general–what I’m an expert in.” So, one line that’s kind of witty and tries to inform somebody. And then I have a video that’s maybe 15 to 25 seconds, not much longer than that, that I upload directly to LinkedIn–I’m not linking to YouTube or anything like that; it’s hosted on LinkedIn. I would get millions of views on those.

And so it’s not every time I get millions of views, I think the most I’ve hit is probably about 4 million on a post. But I found it like, “Oh, I’m reaching people I wouldn’t normally reach, and I’m trying to educate them while I’m doing it, and I’m not being mean spirited.”

And so, you know, they’re not always hits, but I’ve iterated through the process.

I do a little more text over video or text over image now than I used to, because the algorithm picks that up now too. But it’s just something I’m doing to try to educate people, make them laugh–and then, hey, if that leads to brand awareness, great. But I don’t want people to always take me so seriously, so dry, because I can come off kind of that way.

Like my mom used to say, like, “You don’t even get excited at Christmas.” And I’m like, no, I really do. It’s just my personality, which actually makes me a pretty good coach. But this idea of, “Can I just have my humor shine through and educate people?” So yeah, I’ve tested over time. I would say I probably don’t do them quite as much anymore, but I still post them, usually once or twice a week and I just have a lot of fun doing it because I feel like I’m getting through to some people that otherwise maybe I wouldn’t have.


Do you have a favorite?


Oh! Favorites. You know, one of the first ones that hit–and you asked this question about where I find them: I find them on Twitter or Instagram or Reddit, and I find stuff where I go, “Hey, that made me laugh, so maybe that’ll make other people laugh,” and I can make it related to product and experimentation. So, there was one where this guy was trying to do pushups, you know–he’s in a proper push-up position.

He’s staring at the camera, and he has this, like, coach in a split screen, right? And she’s watching him do these crazy pushups. And then there’s this moment where the dog walks behind her, and you realize, “Wait, that dog’s walking on the wall.”

I saw that and I laughed–and I was like, “Oh my gosh, this would be such a great post for LinkedIn.” And then I thought, “Well, what does that remind me of?” And I was like, “This reminds me of consultants who never worked at a startup but tell you how to run your startup.” That one probably had a couple million views on it, but it was by far my favorite, because it was just this funny video that in a few seconds makes you go, “Ah, what’s going on?”–and then, you know, it’s trying to educate people. So I would say that’s probably one of my favorites.


What are you working on right now, David, that you’re excited about?


A few things. I think one is the podcast. I started a podcast called How I Tested That–I’ve been told for years that I have an amazing “podcast voice,” so I’m finally coming around to it. I would say it’s incredibly hard, because it’s very different being on a podcast versus, you know, hosting one. But I love that I have a bunch of people who were case studies in the book lined up to elaborate on those, and some other folks reaching out–I’m trying to get both big companies and startups in there, just talking real stories about experimentation. I also have a community I’ve spun up called “Experimenters,” where I’m trying to see if there’s a community, or an idea, there for people to help each other out with experiments.

So that’s really, really early stage. And then, you know, having the assumptions mapping course out there–that’s such a huge help for people, because they can’t always get my time, so it goes step by step through how I coach people. And I still do public workshops, you know, twice a year. I’m doing one in April here with Alexander Osterwalder and Strategyzer–you know, how to test this.

It’s like a three-day event. It’s almost like a mini conference when we do those.

So, yeah, definitely really busy, in addition to all my other coaching and consulting work. So, just focused on really new ideas still–from idea to product-market fit. That’s the niche I’m really doubling down on.


We’ll link to other things in the show notes so people can check them out. But how do you prefer that people follow what you’re doing in the future and get in touch with you?


Yeah, I would say LinkedIn is a great place to find me–I’m just David J. Bland on LinkedIn. You can find me on other platforms, but I’d say that’s the one I’m most active on. And then my company is Precoil, but I also have my own personal David J. Bland site. If you’re looking for keynotes and things I’ve done in the past, I have those up there as well.

So, I’m easily accessible, I would say.


Well, for our audience out there, David’s created a fantastic catalog of clever ways to test business ideas that reduce the risk of building the wrong thing. I checked out the first episode of the podcast–it’s really interesting, with really good examples. Knowing where to start is often still a big challenge–kind of, which assumption should you test first?

And so, if you’re struggling with that, check out David’s assumptions mapping course. Like Richard said, we’ll put a link to that in the show notes. And check out the book for sure–we love the book. And anybody who needs help with any of these things, please reach out to David and follow him on LinkedIn for those entertaining and thought-provoking memes.

David, thanks for joining us on the Humanizing Work Show.


Thank you. Thank you for having me.


And to everybody who is watching or listening. Thanks for tuning in.
