Feature Mining Q&A

New, big things usually add the most value, but new things are also the most complex, so complexity and value correlate. Which means that when we talk about quick wins, genuine wins are rarely found outside the complex domain. Feature Mining breaks that conflict, decoupling what makes an idea big from what makes it complex so that we can have a quick win in the complex domain.

Feature Mining is a technique for figuring out where to start on a big, complex idea. Following up on a Feature Mining webinar on February 6, 2023, Richard and Peter answer questions from the webinar and from workshops where they teach the technique. If you’re not familiar with Feature Mining, don’t worry: there’s a short intro before the Q&A.

Resources Mentioned

Watch the Feature Mining webinar
Try the Feature Mining Miro template
Learn more in 80/20 Product Ownership or Certified Scrum Product Owner

Episode Transcription

Richard Lawrence

Welcome to the Humanizing Work Show. Last week we did a webinar for Miro about our feature mining technique for finding a good first slice of a big, complex idea. There wasn’t time to get through all the questions in that short time box, so in this episode we’ll tackle several remaining questions from webinar participants, as well as common questions we get about feature mining when we teach it in our workshops. We’ll link to the recording of the original webinar in the show notes so you can check out the whole thing. But since not all our listeners are familiar with feature mining, we’ll also do a quick overview so the Q&A still makes sense to everybody.

Peter Green

We love answering questions from our community on any topic related to the content we talk about. If you have a question you’ve been noodling on, email us at mailbag@humanizingwork.com and we’ll see if we’ve got a good answer for you. And before we jump into today’s episode, just a quick reminder to rate and review the Humanizing Work Show in your podcast app. Or, if you’re watching on YouTube, please subscribe, like, and share today’s episode if you find it valuable. Your help spreading the word is super meaningful to us. So, Richard, why don’t you start by giving a quick background and overview of feature mining, and then we’ll work through the various questions that we get about how to use it well.

Richard

Sounds good. Feature mining comes originally from when I’d published the story splitting flowchart and it had gotten popular. People would come to me with questions about story splitting, and I started seeing a pattern in the questions, along the lines of, “How do I use this stuff early in a new project or initiative, rather than just working incrementally to build on top of a foundation?” So at that time I worked with a few of my clients who were starting new big ideas to figure out a systematic way to find good early slices of these big, complex ideas. In the early experiments we just used the focused conversation method to collect the data and make sense of it together. Then, once I started seeing patterns in that, I turned feature mining into its own step-by-step thing. But if you’re familiar with the focused conversation (ORID) method, it’s still just a particular focused conversation. We talk about that in our episode on retrospectives, so we’ll link to that in the show notes for people who want to see kind of behind the curtain.

Now, briefly, here’s how the feature mining technique works. You get a group together that has different perspectives on the problem side of things and the solution side of things, so usually some business stakeholders and some technical people; we want to bring those different perspectives together into one room. And then we have a conversation where we brainstorm on four topics. What impact are we trying to create? What makes it a big effort we need to slice through? What’s risky about it, what could go wrong? What’s uncertain, what are the questions we need to answer to be successful? We brainstorm on those four topics. We filter those using something like dot voting to get to the most important item in each category. And then we ask, “How can we get this top impact, or start to get this top impact, without having to take on all this bigness? What if we just…?” and brainstorm there. Same thing for risk: how can we begin to mitigate the scariest risk without having to take on all this bigness? “What if we just…?” And finally, on uncertainty: how can we begin to learn about this thing, resolve this uncertainty, without taking on all this bigness? “What if we just…?” And from that brainstorming of different ways to slice, we find a combination of things that would be a good first experiment, a good first thing we could build, that goes straight towards the core complexity while avoiding a lot of the things that make it large and complicated.
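
For readers skimming the transcript, here’s a minimal sketch of that flow in Python: brainstorm four categories, dot-vote down to the most important item in each, then generate “What if we just…?” slice candidates. The sticky notes, vote counts, and helper names are made-up examples for illustration, not content from the webinar.

```python
from collections import Counter

def top_item(stickies_with_votes):
    """Return the sticky note with the most dot votes in one category."""
    return Counter(stickies_with_votes).most_common(1)[0][0]

# Hypothetical session data: each category maps sticky notes to dot votes.
session = {
    "impact": {"Customers can self-serve renewals": 5, "Fewer support tickets": 2},
    "bigness": {"Supports 12 locales and legacy accounts": 4},
    "risk": {"Customers don't trust automated billing": 6},
    "uncertainty": {"Will customers renew without talking to a rep?": 5},
}

# Filter: keep only the most important item in each category.
focus = {category: top_item(stickies) for category, stickies in session.items()}

# Slice brainstorm: for the top impact, risk, and uncertainty, ask
# "What if we just...?" while avoiding the thing that makes it big.
for category in ("impact", "risk", "uncertainty"):
    print(f"What if we just addressed '{focus[category]}' "
          f"without taking on '{focus['bigness']}'?")
```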

Peter

Great. So some of the questions about feature mining that came up, both in the webinar and with participants in our workshops, are kind of about when to use it, and I’ll give you a few examples of that. For example, does it make sense to use it on what’s called a complicated project, where things are relatively predictable, versus a complex project, where things are more uncertain? Let’s start with that one.

Richard

I really designed feature mining for the complex domain, in Cynefin terms, where we have some amount of uncertainty and emergence, and the way we find the order, the way we make sense of what we’re facing, is to actually do something. And this is a thing that product people experience all the time, where you think you have the requirements for something, you build what people asked for, you get it in front of them, and they say things like, “Oh, you know what? Now that I see it, I actually don’t want that. I want this other thing.” That’s complexity. They were discovering their preferences as they saw something concrete. So we need to act in a way that will cause those preferences to be revealed, or cause us to understand more about this new problem or new solution space we’re in.

And so when we’re in that situation, we want to find a small slice that actually involves action, not complete analysis, but something we can start on, something we can build. In product development, that’s often a first feature that will teach us something. If we’re using feature mining in other domains, like organizational change, it might be a pilot team to see how the organizational change works out. In the complicated space, in Cynefin terms, the data already exists in the world, and so the way you address that is by analysis: “Let’s figure out all the variations and come up with a logical way to proceed through them.” You can do that analysis, make a plan, and be right in advance without having to get in there and take some action early.

Peter

This is a little bit related to a question that Ashley asked during the webinar: “Are there particular problem attributes that make this more beneficial than another framework?” And certainly complexity is one of those. Are there any other characteristics or attributes that would cause you to say, “We should probably feature mine this,” or ones where you would say, “We probably shouldn’t use feature mining on this”?

Richard

Complexity plus scale, I think. You’re investing, you know, an hour to an hour and a half of a group of people’s time into figuring out how to start when using feature mining. And if you’re talking about an afternoon of work, spending an hour to figure out how to do an afternoon of work doesn’t make sense. So this is most useful when the scale of the work is large enough that it’s worth spending an hour to figure out even how you’re going to start. Now, I still use this thought process for smaller things, that impact, bigness, risk, uncertainty kind of pattern. I’ll think through those things even on something small to figure out the core complexity and go there first. But I wouldn’t do it as formally if it was something small. Now, there’s another kind of question behind the question here, I think, which is, “How do I recognize complex things?” And I find it useful to think back to past experiences and ask, “What has blown up our plans in the past? What has created uncertainty or change?” and look for those attributes. It’s probably things like where we’re doing something new, where we depend on the future behavior and preferences of humans, where we have external factors outside our control, maybe dependencies on other things. The more you see those things, the more feature mining is gonna be the right tool to start with.

Peter

Yeah. There’s an interesting corollary here, I think. One of the questions, asked by Ian during the webinar, was about when you would use this versus, say, story splitting. And I think that answers that question as well: story splitting is probably at a much smaller scale. Those patterns are pretty recognizable. Feature mining is really for when we’re earlier on in a big thing. Right?

Richard

Right. And the story splitting patterns are kind of an attempt to move how to split from the complex domain to the clear, or obvious, domain. You can recognize the pattern: oh, there are multiple operations here; oh, there are multiple business rules here. It’s obvious how you would slice through that. So if you can recognize those patterns, you probably don’t need to do this kind of thing; you can go straight to the solution.

Peter

There was another really interesting question about context here, where somebody said, “Hey, why would you use this instead of something like the Lean Canvas?”

Richard

And I think you could answer that better than I could, because you know some of those Lean Startup tools more deeply than I do. What’s your answer to that?

Peter

Yeah, well, I think the Lean Canvas is a nice way to think about all of the components of a business idea and how they fit together. The Lean Canvas has things like key customers or target customers, the description of our solution, a value proposition. It also includes things like the channel we’ll use to reach our customers, and the business model underneath it: revenue and costs and so on. So I think of the Lean Canvas as a way to more fully describe the big idea, and feature mining as the tool we would use to take the big solution we have in mind and come up with the first slice of that solution.

I could see that applying to multiple areas of the canvas as well, though. If there’s some complexity around which channel we should use to reach our customers, I could see using feature mining to come up with the first channel test we could do.

Richard

Right. And in feature mining that would show up in the risks and uncertainties; “Can we reach our customers through an affiliate channel?” might be an uncertainty, and the impact would be focused on the outcome you’re trying to create. But you’d have to run an early experiment on “Can we create that outcome through that channel?”

Peter

Nice. There was one other question that came up that was sort of related to how you would use it, but from a different angle. This person asked, “How would you, or could you, approach feature mining if you’re already in the middle of a project and you notice it’s, as they put it, off the rails?” How would you answer that?

Richard

(Chuckling) Yeah. There’s a tendency to want to start our big things on the right side of Cynefin, on the ordered side: “Let’s build the foundations, the things we know about. Let’s get some quick wins. Let’s move into the things we analyzed.” And then finally, “Let’s go towards the uncertain, complex stuff.” And a lot of times a project off the rails is one that looked really good for a while because you were doing the predictable bits but putting off the complex part, and then you finally start dipping your toe in the complex part and things blow up in some way, or it becomes clear that it’s not working. So our answer to “When do you use feature mining?” is: to start big, complex things. But if you’re in the middle of a big thing and you haven’t really thoroughly probed the complexity yet, it’s still the right tool to figure out how to go straight over to the complexity and go there first. So it can be a good way to put things back on track. And if you’re off the rails, you probably have a more tangible sense of what your risks and uncertainties are at this point, so it’ll push you towards the most important ones there to try to bring it back.

Peter

There was another question that came up related to this. As you were describing feature mining, you were describing the brainstorm where you say, “Well, what if we just… what if we just…?” And there was a question about how “What if we just…?” is different from those early small wins you were describing, where we get started and just hit the clear domain, knock out some small wins. How would you distinguish those for the person who asked that question?

Richard

I think most people assume that complex means big, that the two go together. And so if I want something small that will give me some quick results, some quick sense of progress, the assumption is that it’s necessarily not in the complex domain. And the problem that feature mining is solving, really, is breaking that conflict: decoupling what makes it big from what makes it complex so that we can have a quick win in the complex domain. Not to mention, typically, a major source of complexity for us is newness: a thing we’ve never done before, right? New people, new technology, new problem, new customer, whatever it is, is a core source of complexity. The newness is usually also the very reason why we’re doing whatever we’re doing. So complexity and value correlate. Which means when we talk about quick wins, genuine wins are often not found outside the complex domain.

Everything else is about preparing for or scaling the win. So if you avoid the complexity to try to get a quick win, the win isn’t really a win that matters, necessarily. So we want complex but small. This came up this week, actually, in my CSPO course. One of the participants was kind of laughing at the end because their group had found something small in our practice feature mining session, but had avoided the core complexity to do it. And as soon as I shared the feature that we picked when we went through the same feature mining session, they were laughing because it was immediately obvious: “Oh, we avoided the complex thing. You went straight towards it. I get it now.” And it was a fun breakthrough in the class to see that contrast.

Peter

You already answered one of the questions that comes up frequently, about how long a session takes. Right? 60 to 90 minutes. There was another question that came up that sort of has a question underneath the question of how long it’ll take. This question said, “You know, outside of the typical agile ceremonies or the Scrum meetings, for example, how much additional time do you take to do all these meetings on feature mining?” How do you answer that?

Richard 

Yeah. Meeting fatigue is real, because people experience a lot of bad meetings and cultures where everything requires a meeting. So I hear the pain behind the question. We want to make our meeting time count. If this is a thing that you generally do once at the beginning of a big, complex idea, it kind of just falls into the bucket of backlog refinement. It’s what backlog refinement looks like when you’re at the beginning of a large, complex thing. Then, as you get further into it and the order begins to emerge, backlog refinement looks like smaller conversations: breaking down features into stories and elaborating the details as appropriate. So if we assume that a Scrum team should be spending about 5% of their time collectively on “look ahead” kind of work, and a Product Owner a larger percentage of their time working outside the current sprint, this should just sort of fit into that. In a week where we’re doing feature mining, we’re probably doing less of other things, and it kind of works out.
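
A rough back-of-the-envelope sketch of that budget in Python (the team size, sprint length, and attendance below are assumptions for illustration, not numbers from the episode):

```python
# Back-of-the-envelope math behind "it kind of works out."
team_size = 7                  # people on the Scrum team (assumed)
sprint_hours_per_person = 80   # two-week sprint at ~40 hours/week (assumed)
look_ahead_share = 0.05        # ~5% of collective time on look-ahead work

look_ahead_budget = team_size * sprint_hours_per_person * look_ahead_share
print(f"Look-ahead budget: {look_ahead_budget:.0f} person-hours per sprint")  # 28

session_hours = 1.5            # a 60-90 minute feature mining session
attendees = 8                  # team plus a couple of stakeholders (assumed)
session_cost = session_hours * attendees
print(f"Feature mining session: {session_cost:.0f} person-hours")             # 12
```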

Peter 

Great. Let’s talk about some of the questions that came up about who is in the room physically or virtually when we’re doing feature mining. And an example question that came up was, “Do you always need to have both business and technical in a feature mining session, or could you hold similar sessions separately and then combine them later?”

Richard

The whole reason we ended up with feature mining is that the traditional approach is to do these things separately. You’ve got business people making big plans around business impact and rollouts and business cases; you’ve got technical people making big plans around the solution and architecture and design and scalability and all those kinds of things. They finally get together to figure out where to start, and they’ve both dug into, “This is big, and it needs to be big to be successful.” So when we get everybody together before there’s a really strong commitment to how we’re going to solve the problem, we get to sidestep the tendency to cement it into a big solution. So I would strongly, strongly caution against going too far in these separate conversations before pooling what you know and working together to find a way through it.

Peter

And let’s take the category of, quote unquote, business or customer voice in that, because a Scrum team has a Product Owner, if they’re using the Scrum roles, who is supposed to represent the customer voice. How often do you see it just be the Product Owner representing that, versus having other stakeholders? Or, really, the bigger question: how do we feel more confident that the customer voice is really in the room versus some echo or proxy of what the customer might say?

Richard

There are several different questions, I think, kind of tangled up in there. One simple answer is that this isn’t a replacement for customer research, and that often comes first. Indeed, when we practice this in a CSPO class, we’ve already done some customer research, customer profiles, some vision around why the solution matters, and all of that informs, particularly, impact, risk, and uncertainty when we get into the feature mining session. So there’s an assumption that you’ve done some customer interviews or other things leading into it to give you some confidence, because those are faster, cheaper experiments.

There’s also the question in here of who you include, like what perspectives are actually in the room. A happy accident of the two places where I originally worked out this technique was that they were strong consensus cultures, where everybody had to be in the room for everything and everybody had to be heard. So that made it necessary to work out a facilitation method that scaled well. And what we discovered was that when there were, in a sense, too many people in the room, it went a little slower, but we got really good alignment at the end around why we’re taking the approach that we’re taking. And that was a pretty nice benefit. So now I suggest: if you’re thinking about whether you should include somebody in this conversation, include them. And as it gets larger, facilitate more carefully.

At a certain point, probably get somebody else to facilitate it. It’s often the Product Owner who facilitates these things, but as it gets larger, the Product Owner may need to be more of a participant and less of a facilitator; holding both roles is kind of hard. So get somebody else, you know: a coach or Scrum Master from another team, maybe even an outside facilitator. We’ve occasionally done this for clients on really high-stakes feature mining sessions for major strategic initiatives.

Peter

As you’re preparing to facilitate one of these, let’s say that you wanted to do it, how much pre-work do you have people do? Do you have people start thinking about items in the various columns ahead of time? Do you do anything asynchronously, or do you find that it’s better to not prime the pump too much and have them respond in the moment to the questions?

Richard

I’ve never had people do pre-work for feature mining explicitly, but the question made me realize that in the organizations where this became a strong part of the culture, just how people start big things, individuals started thinking more and more in these categories, and they would show up much more prepared, having thought about impact, bigness, risk, and uncertainty. Which makes me wonder if there’s something to not coming up with all your answers in advance, but reflecting a little bit, thinking about what data you have that you could bring to the table about those four things; that might produce a better outcome. This is different from trying to do it asynchronously, which I think is a bad idea. I think good things come out in the conversation when you’re brainstorming together and giving each other ideas. So I think we should actually visualize it and talk about it synchronously, whether that’s on a Zoom call with Miro, like we often do, or in a room with a whiteboard, or any number of other tools. But coming prepared, having thought about these things, probably has more benefit than downside.

Peter

Some of the next set of questions are really about the nuts and bolts of facilitating feature mining, so I’ll ask those one at a time, because I think largely there’ll be quick answers, but probably really useful ones, since these are the sorts of questions we often get about how to facilitate it once you have the right people in the room. Do you approach the brainstorming portion of these lists as brainwriting, where everybody writes individually and then brings their items in, or do you have it more, you know, “Hey, say it and then write it”? Is it more conversational? What’s your preferred approach to the brainstorming part?

Richard

It’s important to know your group when you’re thinking about how to brainstorm. And, just to stereotype a little bit too much, you’ve got, in many groups, the extroverts who think out loud and the introverts who need to get their thoughts together before they speak. It doesn’t quite correlate to those preferences, because that’s not really what introversion and extroversion are.

But there’s a pretty strong pattern there that I see in the groups I facilitate. And if you notice that dynamic in your group, like the same people talking all the time when you do a “shout it and write it” kind of pattern, you might want to say, “Let’s think on your own, write down some stickies, and then share them,” like the on-off-on technique that we talk about in our retrospectives course, where everybody writes, we take the things off, then people take turns sharing items back, and you delete any duplicates you have, so only unique items get shared. That can be a nice balance between efficiency and still giving people a chance to get their thoughts together before sharing. So I’ll use either of those approaches, just shout it and write it, or more structured, write it first, then share, depending on what my group seems to need.

Peter 

One of the questions is about how you keep people focused on the actual lists and not start blowing up the scope of the idea. When you’re asking about all of the customer impacts, people might get excited and start saying, “Oh, we could also… we could also…”, and the same with the risks and the bigness. There’s a tendency, I think, when we’re in those types of discussions, for the idea to get even bigger. How do you prevent that in a feature mining session?

Richard

I’m not sure you want to prevent it. Feature mining has the kind of standard gamestorming structure to it of divergence, exploration, convergence: generate a bunch of possibilities, make sure we all understand them, converge down to something to focus on. And we do that twice. We do it once on all the data around those four categories: brainstorm, then filter. And we do the same thing on ways to slice: brainstorm, then filter down to the first slice you’re going to do. I think getting a lot of breadth and divergence in the divergent stage of those moves doesn’t really hurt, as long as you do the filtering carefully and ask questions like, “Out of all these different impacts we could create, if we could only guarantee one of them, which one would we guarantee?”

And then we put our focus on that for the rest of the conversation. I even find it beneficial when people put wrong things on the board, not deliberately, but discovering in this conversation that you and I are both excited about this, but for wildly different reasons, and one of those may not even be true. Now we see that that’s out there, and we can align around what’s actually true and useful to us. So I would let it grow in the divergent phase, and then use the convergent phase to bring the focus back to where we’re going to start.

Peter

There’s a question about the difference between risk and uncertainty, which sit in the same column but can feel a little mushy. Could you help clarify that, maybe with an example you could share, or just clarify the concepts and how they’re different?

Richard

Risk and uncertainty are in one column because originally they were one question: where are the risks and uncertainties? And then I found over time that groups thought more clearly about these things, and talked more clearly about them, if we captured them differently: if we describe a risk as something detrimental happening, something going wrong that causes us to not experience the impact we want, and capture uncertainties as questions we need to answer that are somehow essential for success. There’s definitely overlap between those, because risks have uncertainty and uncertainties are risky, so I let duplicates come up if they need to. And I’ll give you an example of something that could come up both ways. We might have a risk that we build this system and people don’t use it. That’s a thing going wrong. We might also have an uncertainty that seems to touch the same thing, like “How do we know people want to use this?” or “Do people want to use this?” That’s an uncertainty, right? When we see them phrased both ways, we can decide: is this a risk we need to mitigate, like a thing that could just happen and we’re trying to prevent it from happening? Or is this an area of uncertainty where we could surface some information that would inform us and make us more likely to be successful?

I think in this case it’s the latter. We’re not just trying to protect ourselves from fickle users deciding not to use it. Rather, we want to resolve the uncertainty around this and understand more about our users, so that we make it very likely that they use it. So it’s mitigating versus learning that distinguishes which one you would choose when you’re in the filtering stage. During the divergent stage, let them come up both ways. Some people think more about things going wrong; the risk question is for them. Some people think more about what they’re curious about, what they want to learn; the uncertainty question is for them. Then, when you look at it together, you’re asking: what should we mitigate? What should we learn about?

Peter

In the webinar, Lucy asked a question. She framed it as, “What would make you not progress with an idea using this format?” I’m curious if you have times or examples when you got halfway through feature mining and then said, “You know what, we shouldn’t keep doing this.”

Richard

There was one example at a client that I loved, where a particular stakeholder who had a lot of power in the organization was known for asking IT to build things that didn’t get used. And it was really frustrating for the IT leadership and teams: “Oh no, this guy’s asking for something, it’s probably going to be a waste, but we have to build it, so we keep him happy, because he’s got the executives’ ears.” So when they started adopting feature mining in this organization, instead of trying to argue with it and not have to build this thing, they switched from “No, this is a bad idea” to a “yes, and” kind of approach: “Oh, that’s interesting, and here’s how we start with a new idea. Come on down to our office and let’s do a feature mining session.”

And they got into it. About 20 minutes into the conversation, this person realized, you know, the impact here isn’t very strong, the risk is really high, and there’s kind of an existing solution for this, and he said, “You know, this isn’t really going to work, is it? Thanks for helping me get clear on that. Let’s not build it.” So in a 20-minute meeting, they effectively canceled a half-million-dollar initiative. So that’s one way this might help you decide. I’ve also seen that happen later, where you make it all the way to a first feature, you build that feature or run that test or whatever it is, and you discover there that you had wrong assumptions, like the impact doesn’t work out or this risk can’t be mitigated. And that’s a nice, cheap way to find that out, rather than building a big solution and finding out late that it was the wrong thing.

There’s another angle on this question, another way to not progress, which isn’t about not progressing with the solution; it’s more about not experiencing the benefit of feature mining. We talked about this in a recent CSPO course, where one group realized that they hadn’t defined the impact very well, and they didn’t end up with a very good feature, because the impact was kind of a high-level, nebulous thing, like “happier customers,” and it didn’t really help them figure out what it is about this particular initiative that would produce happier customers. So another way to sort of not progress, but not in a “we learned it was wrong” way, is that we didn’t really hone in on the right desired impact, and then everything else downstream of that was unclear. So get the impact clear.

Peter

As you were describing getting an MMF (a minimum marketable feature), actually building it, and then discovering that the impact wasn’t actually there, I think that example answers a couple of the questions about what happens after feature mining. Like Pascal in the webinar asked, “Is there a back link from the market to the shipped MMF?” Like, how do you handle the wrong assumptions? And this really points out that feature mining does not reduce the uncertainty; it doesn’t reduce the complexity.

It gives us the first probe, which will allow us to test that, right? To run that probe to learn from the market, but it allows us to do that in a fast, kind of high-impact way. Like, what is the 80/20 thing that we could build that would allow us to learn from the market? So I think it’s important to clarify that as an outcome of feature mining: it does not actually test the market, but it allows you to design the right experiment that will most effectively test the market.

Richard

That’s right. I wish we hadn’t called this feature mining originally. Once you name a thing, the name is kind of out there. But it’s really experiment design. You get to the end of this and you’ve designed an experiment. The most common experiment in product development at this stage is a feature, but even with a feature, it’s often useful to say, “What hypothesis is behind this feature, and how are we gonna know if we were right or wrong?” And then actually follow it through: “Did it go the way we expected? Did we learn what we thought we would? And so what?”

Peter

There was one last question that came up in the webinar, and we kind of had a hard time sorting it into the categories we’ve worked through in this video, right? We went in a flow: when should you use it, getting started, who should be there, how do you facilitate, what do you do after?

This one I wasn’t exactly sure where to put, but I thought it was such an interesting question that I want to talk about it anyway. Kelly in the webinar asked, “What do you do when process or tool fatigue sets in?” She says, “In my experience, novelty in process helps humans stay engaged, even when the new problem should be novelty enough,” which we agree with. So how do you respond to that question?

Richard 

Yeah, I think process and tool fatigue isn’t exactly process and tool fatigue. I think it’s fatigue with processes and tools that don’t feel like they’re directly connected to outcomes that you want. I can think of areas in my own life where, even though I love the complex domain and I value new and interesting things, I have really strong routines and I keep going back to the same things.

Like, there’s a running joke at our favorite restaurant, where my wife and I go regularly, at least once a month, and where we’ve gotten to know the owner really well. We laugh every time because she’s trying to eat through the whole menu, and I get the same thing almost every time, because I’m going there because this thing is really good. That’s what I like about this particular restaurant. I don’t need novelty there; I need effectiveness. And in that case, effectiveness means I want to eat that beef mole, because it’s really good. And I think about the same thing here with facilitation methods.

This comes up in retrospectives sometimes, where your retrospective process doesn’t really work and it feels like you need novelty. So you use some game or something for your next retrospective, and maybe you produce a better outcome, but what you’re really seeing is that the outcome was the important part, not the change in the process itself. So, bringing that back to feature mining: actually talking about the impact we’re trying to create, getting clear on what makes it big, surfacing the right risks and uncertainties is inherently interesting, because the problem we’re trying to solve is interesting, and the questions and process are lightweight enough that they should get out of the way. Our focus should be on the problem we’re trying to talk about.

If that’s not happening, it’s a sign people are looking at the process rather than through the process at their problem. And we may need to simplify the process, or bring the focus back to the core question we’re trying to answer and not get hung up on the process. It should kind of be invisible to us; it’s just us having a focused conversation around particular things.

Peter

Two things. Are you gonna shout out the restaurant, or are you gonna try and keep it a secret so you always have enough beef mole?

Richard

Oh, I am conflicted here. Zocalito, in Denver. It’s an Oaxacan place downtown, one of my favorite restaurants anywhere. Tell them Richard Lawrence sent you if you go.

Peter

And then the other comment on that: I agree with you a hundred percent that when a process consistently gives you great outcomes, variation is not the thing you need. And I still think it’s a great idea to think of ways to mix it up a little bit to have variety, because, and again, this is a human preference thing, some people have more need for variety than predictability, right? So even for something like feature mining, as long as you’re doing the process in order and getting the outcomes, I don’t think there’s anything wrong with experimenting with, “Hey, instead of stickies, today we’re gonna do it this way,” or even, you know, creating a different tool to use that time.

Richard

“Remember the future” for impact, for example. (Peter agrees: Yeah, uh huh.) This has been successful; what things do we see in the world? Let’s describe it vividly. Or, “Let’s write the trade journal article about how great the outcome was.” So you can apply creative techniques to talking about these things. You can also do the pre-mortem kind of thing on the risks: write the article about how badly this thing crashed and burned, and make it really vivid. So introducing tools in there that create more creativity and interest towards the same outcome is not a bad idea, but be clear about what you’re doing. If the process itself is broken, layering novelty and whimsy on top of it doesn’t fix it.

Peter

Those were the main questions we got, and it was fun to talk through them and dig into feature mining with you all. Please like and share the episode if you found it useful, and keep the questions coming. We love digging into topics like these, whether that’s related to deciding what to build in the product management space, how to help teams be more effective, or leadership, management, and organizations. Those are all things that are near and dear to our hearts, and if you have questions in any of those areas, please let us know and we’ll shoot an episode about it. Thanks, everyone.

 
