Measuring the Success of Agile Adoption

If you want to measure your Agile adoption, start by getting clear on the goals for that Agile adoption in the first place. Remember, different stakeholders probably value human, business, and process benefits differently. Since the best data will take a long time to get, measure leading indicators until you get the better data.

Take big industry surveys on the success of Agile with a grain of salt. Even when the research methodology is good (which is rare), how they define key concepts like ‘success’ and ‘Agile’ may be very different from the ‘Agile’ you’re considering and the ‘success’ you’re hoping for.

In this episode, Richard and Peter take on two related questions: “Can those Agile success surveys be believed?” and “How should we think about measuring the success of our own Agile adoption?”

Episode Transcription

Richard Lawrence

Welcome to the Humanizing Work Mailbag, where we answer questions from the Humanizing Work Community!

Peter Green

If you have a question you’ve been noodling on, email us at mailbag@humanizingwork.com, and we’ll see if we’ve got a good answer for you.

Before we jump into today's episode, just a quick reminder to rate and review the Humanizing Work Show in your podcast app, or, if you're following us on YouTube, please subscribe, like, and share today's episode if you find it valuable. Your help spreading the word is super meaningful to us!

Richard

And on to this episode: a client reached out to us to ask about Agile success surveys. He was curious: Can you trust the statistics? How should we even think about success, failure, and our chances for success? As an example, the client linked to a web page summarizing various surveys on the topic. Here's our answer to how to think about Agile success rates, surveys, and such.

I'm deeply skeptical of any of these studies. There are too many variables and too many slippery concepts to do anything like a clean study that could draw rigorous conclusions. Just on the surface, how the researchers define "Agile," "waterfall," and "success" is going to skew the results at best and beg the question at worst. Take, for example, defining "success" the way the Standish Group does, which is often the standard these studies use.

They say a successful project is one that meets the assumed schedule, cost, and scope. A challenged project is completed but misses at least one of those three constraints. And a failed project is one that is canceled before it's completed, or is completed but never used. So, if you use that definition of success, you end up really measuring how accurately you predicted the future, not whether your initiative achieved meaningful business results and/or had positive long-term effects for the organization, let alone the people involved.
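To make that rubric concrete, here's a minimal sketch of how a project gets classified under that kind of triple-constraint definition. This isn't the Standish Group's actual methodology or code; the field names and the rule are our illustrative rendering of the definition above.

```python
from dataclasses import dataclass

@dataclass
class Project:
    met_schedule: bool            # finished by the originally assumed date
    met_cost: bool                # finished within the originally assumed budget
    met_scope: bool               # delivered the originally assumed scope
    canceled: bool = False
    used_after_delivery: bool = True

def triple_constraint_rating(p: Project) -> str:
    """Classify a project by prediction accuracy only.

    Note what's missing: nothing here asks whether the project produced
    meaningful business value, only whether the original predictions held.
    """
    if p.canceled or not p.used_after_delivery:
        return "failed"
    if p.met_schedule and p.met_cost and p.met_scope:
        return "successful"
    return "challenged"  # completed, but missed schedule, cost, and/or scope

# Example: delivered late but heavily used and valuable -> still only "challenged"
print(triple_constraint_rating(Project(met_schedule=False, met_cost=True, met_scope=True)))
```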

And that’s just the most basic question of “what’s success?” Try to be any more nuanced in your study, and it gets even worse.

You can believe some of the more trivial conclusions in these studies. For example, do a large majority of companies practice something today that they call “Agile?” Almost certainly. Are they talking about the same thing? Almost certainly not.

When people ask us questions about these studies, usually they’re really asking, “What are the chances we can expect to be successful with an Agile approach?” That’s a question more like, “What are the chances I’ll be successful with such and such a diet or workout program?”

For questions like that, even the best studies can only give you a sense of whether anyone has been successful. This can be good inspiration to start your own experiment and it can caution you about which experiments to avoid. But it’s important to dig further. What made the successful examples successful? What made the failures fail? Do I have the attributes that correlate to success or to failure?

Then, if that research causes you to hypothesize that you have a chance at being successful, it’s an experimental problem because you’re in the complex domain. So, try something in a small, safe way to see if your hypothesis is valid, which typically means some kind of pilot team. And in a previous episode, we talked about how to choose a pilot team if you want to do this kind of experiment, and we’ll link to that episode in the show notes.

Peter, I know you did some work to try to measure the success of Agile adoption when you were at Adobe. Why don’t you tell us a little bit about that?

Peter

Yeah, in fact, the pattern you just described for how to get into it, where you pick a pilot team and then expand from there: apparently I was following your advice even 17 years ago, Richard, because that's what we did.

The team that I was program manager for adopted Scrum and found some success with it. Then other teams started to get interested, and Scrum and Agile approaches started to spread in the organization around that time. A few years into that, I was curious whether all the teams I had worked with and trained were actually using the things I had taught them, and whether it was working for them. I didn't want to teach things that weren't actually working in a broad context. And so I started sending out a survey. The survey had a few different components. One component measured which parts of Scrum they were actually using, with a little bit in there about more general Agile practices outside of Scrum as well. The second component was whether the individuals responding felt like it was working well for them: were these Agile techniques serving them and their customers and teams?

I ended up publishing a lot of this data in a paper at the Hawaii International Conference on System Sciences (Adobe_HICSS44). One of the interesting things about publishing and surveying that data was realizing that data about whether Scrum is working is not that influential. Just measuring whether people were using it and liking it was not convincing outside of the people who already understood what Scrum was.

So the other thing that I did at Adobe was to start looking for things that teams were measuring before they adopted Scrum, which I believed should improve after adopting it. At Adobe I found three examples of that.

One thing almost every team at Adobe was tracking was defect rates: how many bugs are we finding and fixing in the software? My hypothesis was that this should get a lot better, that we should see fewer open defects. That turned out to be true, and I was able to publish some of that data.

Another one was that a few teams were tracking customer Net Promoter Score for their products before they adopted Scrum and continued tracking it afterward. Not every team that adopted Scrum was doing this, but a few of them were, and it was interesting to note that all of the teams tracking it saw a marked improvement in their Net Promoter Score after adopting Scrum; in at least a couple of cases, the differences were significant.

The third thing was less quantitative and more qualitative: how much overtime you work at the end of a release when you're trying to get the thing out the door. Since Adobe's products were mostly being built by full-time employees and contractors, we weren't really tracking things like overtime. But I do remember, toward the end of a cycle when our team was out having a leisurely lunch, looking at another team that I knew was working 60–80-hour weeks to get something out the door. That team was one of the first ones to adopt Scrum after ours, because they were asking, "How is that team not working crazy hours right now?"
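For reference, Net Promoter Score comes from 0–10 "how likely are you to recommend this?" survey responses: promoters minus detractors, as a percentage of everyone who answered. Here's a minimal sketch of that calculation; the before/after numbers are made up for illustration, not Adobe's data.

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'how likely are you to recommend us?' ratings.

    Promoters (9-10) minus detractors (0-6), as a percentage of all responses.
    Passives (7-8) count in the denominator but not the numerator.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical before/after comparison for one product (illustrative data only)
before = [6, 7, 9, 5, 8, 10, 4, 9, 7, 6]
after  = [9, 10, 8, 9, 7, 10, 9, 6, 10, 9]
print(f"NPS before: {net_promoter_score(before):+.0f}")  # -10
print(f"NPS after:  {net_promoter_score(after):+.0f}")   # +60
```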

So, looking for things like that was super useful to us.

Going back to this idea of "What are we actually measuring?" that you were mentioning before, Richard: when you and I ask people how they measure the success of Agile adoptions, we usually hear answers that highlight one of three categories of goals or outcomes for Agile. The first one, and we hear this from a ton of Agile coaches, is that Agile is successful if employees or team members are more engaged, if they're finding more meaning in their work, if they're collaborating more effectively. Kind of a human focus for that one. Other people we talk to say that Agile is successful if the business is getting positive benefits: we're more successful at whatever our business does. And the third one is an interesting one: we're successful if we're doing it. It's kind of a process and structure measure. Often, at the root of that is "We want more predictability. We want more consistency across teams."

I think that, in different ways, all three of these are important. And most of us have a bias toward one or two of the three, and against one or two of the three, as being important. So when I think about measuring effectiveness, I think it's worth examining our own blind spots: which of those categories kind of bugs me a little bit, which one I don't think matters that much? Because there are probably stakeholders who do care about that, and if I overlook it or even talk it down, that's likely to have a negative impact on how effectively we can adopt these types of techniques.

The other factor I would consider is that things like Net Promoter Score are really convincing. “NPS went up 30 points after we adopted Scrum.” That’s pretty convincing, but it’s also a really lagging indicator.

So the other thing I think is important to do is to look for the leading indicators and start measuring those until you can get the better data.

Richard

Yeah, and for that I think you have to see enough teams to start knowing what's correlated with the positive lagging indicators. We've done some work capturing sets of habits we've seen on successful Agile teams, and how those roll up to coherent strategies that produce particular outcomes: actually delivering reliably, delivering value, interacting with and understanding customers, that sort of thing. So we look for practices and structural factors. Team composition, for example: if you have a fully cross-functional team, that's an early indicator that the right people are on the team, and you have a better chance of success. Or we see habits like vertical slicing and frequently getting to Done; that's a clue that you're probably delivering more value and seeing more feedback, which will probably lead to those lagging indicators. Or behaviors like interacting directly with customers and explicitly forming hypotheses to test, which suggest you're going to be iterating your way toward more customer value. So once you've seen hundreds of teams like we have, you start to see certain structural factors, behaviors, practices, and habits correlate with the positive outcomes, and we start watching for those on teams. It's not definitive: you can have a team that has a lot of those things and doesn't produce good outcomes, and you can have teams that produce good outcomes without some of those factors. Correlation is not causation, but there are definitely some strong patterns in the correlation there.
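As a rough illustration only, here's what watching for those kinds of leading indicators on a team might look like. The indicators are the ones mentioned above; treating them as an equally weighted checklist is our simplifying assumption, not a validated model.

```python
# A lightweight leading-indicator checklist for one team. The indicators come
# from the discussion above; scoring them equally is an illustrative assumption,
# not a claim that they predict outcomes with any particular weight.
LEADING_INDICATORS = [
    "fully cross-functional team",
    "vertical slicing of work",
    "frequently getting slices to Done",
    "direct interaction with customers",
    "explicitly forming and testing hypotheses",
]

def leading_indicator_summary(observed: dict[str, bool]) -> str:
    """Summarize which leading indicators were observed and which are missing."""
    present = [name for name in LEADING_INDICATORS if observed.get(name)]
    missing = [name for name in LEADING_INDICATORS if not observed.get(name)]
    lines = [f"{len(present)}/{len(LEADING_INDICATORS)} leading indicators observed"]
    if missing:
        lines.append("worth a closer look: " + "; ".join(missing))
    return "\n".join(lines)

# Example: a team early in its adoption
print(leading_indicator_summary({
    "fully cross-functional team": True,
    "vertical slicing of work": True,
    "frequently getting slices to Done": False,
    "direct interaction with customers": False,
    "explicitly forming and testing hypotheses": False,
}))
```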

Peter

So, to summarize:

If you want to measure your Agile adoption, start by getting clear on the goals for that Agile adoption in the first place. Remember, different stakeholders probably value human, business, and process benefits differently. Since the best data will take a long time to get, look for leading indicators you can measure until you get the better data.

Richard

To get back to the original question, take those big industry surveys on the success of Agile with a grain of salt. Even when the research methodology is good (which is rare), how they define key concepts like “success” and “Agile” may be very different from the “Agile” you’re considering and the “success” you’re hoping for.

Peter

We hope you found this answer helpful. If you're watching on YouTube, let us know in the comments how you've measured success, and share any other ideas related to this topic.

Richard

Please like and share this episode if you found it useful and keep the questions coming! We love digging into tough topics, so send them our way.

