by Andy Hunt, June 1, 2015
In my last article questioning the state of the agile movement, I asked “What happened to the idea of inspect and adapt? What happened to the idea of introducing new practices, of evolving our practices to suit the challenges at hand?”
According to a 2014 survey on agile development by VersionOne, 56% of the teams they surveyed use Scrum, some 10% use a Scrum/XP mix, and 8% are using a mixture of agile methods (including XP, Kanban, Lean, etc.). To me, that says that maybe 18% of the survey respondents are possibly doing the right thing. The rest, perhaps, are merely doing a stand-up meeting and calling it “agile.”
Okay, perhaps that’s overly cynical of me, but another question on that same survey shows only 26% of the teams are set up for continuous deployment. I think that’s a telling statistic. It’s one thing to have a build machine running in the background and call it “continuous integration,” but you actually have to know what you’re doing to take that to the next step and be prepared for continuous deployment. I consider that a hallmark of an actually effective, agile team. It doesn’t matter if your design is awesome and you’ve figured out the customer’s needs if you can’t then actually get software out the door consistently, reliably, and automatically.
And look at that 8% figure. I think those are the teams that might be truly effective, and are able to embrace change. They aren’t fussed over whether a practice is canonical Scrum, XP, Lean, or whatever. They are mixing and matching what works for them. That is what it means to be agile, and that isn’t something you achieve by merely following static Scrum or XP practices.
The problem, I think, is that while agile methods invite you to inspect and adapt as necessary, that idea isn’t represented as a first-class part of any method. There is no Scrum practice whose purpose is to create new practices. The same is true of Extreme Programming. The heart of becoming an agile practitioner, as Dave Thomas wisely points out in this blog post http://pragdave.me/blog/2014/03/04/time-to-kill-agile, is to:

Find out where you are. Take a small step towards your goal. Adjust your understanding based on what you learned. Repeat.
Easy to understand, but perhaps much harder to actually do. Just because you read the driver’s ed manual (an agile book) and maybe even got your driver’s license (or some silly certification) doesn’t mean you’re ready for Formula One racing, Le Mans, or rush hour in Manhattan. And that’s a big problem with agile adoption, as Jared points out in this post http://www.agileartisans.com/main/blog/218.
So I’d like to try an experiment. I’d like to propose a method that features the Experiment as a first-class feature of the method. Beginning with adoption and lasting through your own unique local evolution, you can be guided by actual outcomes from experiments run under your own actual conditions (beginners start off with very concrete, unambiguous steps and checklists, and move away from those as skill progresses).
Rather than relying on what might have worked somewhere else with different people, you can inspect and adapt in the proper context: yours.
You can use experiments to take back control. Let’s see how this might work.
In general, folks don’t like change. There’s a saying that only wet babies like change, but even then there’s a lot of crying and wailing involved. So really, no one wants change.
What folks want is new and different results. Preferably for free, without having to change anything significant. But most of all, folks really don’t like to be changed. For any chance of success at all, change needs to come from one’s own desire. It’s like the old joke where the lightbulb has to want to change.
That means that people need to be shown a very personal, direct upside for them. Abstract notions such as “higher quality code” or “improve our time to market” just aren’t compelling. As a developer, I probably don’t care about these issues; any benefit to myself is indirect at best. As a customer support rep, I may have much more of an interest in code quality, because that affects me. As a VP of sales, time to market may be critical to me. I may not give a rat’s hat about “code quality.” So if I’m going to buy in to some new approach, and it requires me to do something differently, then I need to see how it specifically benefits me. And then maybe I’ll consider trying this new, wacky thing you’re proposing.
The GROWS idea of better adoption goes something like this:
First, everything is paced according to your skill level. Since we’re talking about adoption here, everyone is a novice at GROWS—they have no experience with it yet. As a novice, there are some practices you want to try, and there are practices that GROWS says you need to start with.
In GROWS, you adopt a practice by running an experiment. Each practice comes with its own experiment, which helps you identify the conditions for the experiment, the feedback to look for, and how to evaluate it. For novices, the feedback and evaluation are very concrete and unambiguous, with no judgment required. That part comes later.
Experiments are time-boxed, which limits commitment and risk, unlike the more amorphous “change,” which is permanent and open-ended. It’s very clear to all involved that you aren’t yet adopting this practice or committing to it. You’re just going to give it a try.
Everyone participates in the experiment and in evaluating the outcome, which gives the participants a chance to “put their own egg in,” as the saying goes. (When Betty Crocker first came out with an instant cake mix, it was a failure. All you had to do was add water to the mix. They changed the formula so you had to add a fresh egg and water, and now consumers felt like cooks again. That version was a success: level of participation makes a difference!)
At the early stages of adoption, the recommended practices are oriented toward practical “hygiene” and safety; they are not controversial or abstract, and they don’t rely on delayed gratification. These first practices are immediate and useful, and chiefly serve to introduce the idea that you can try stuff before committing to it, and build and deploy reliably. This approach establishes a baseline environment where you have control over adopting practices, modifying them, or rejecting them—and are even encouraged to do so. (Oh, and as you might guess, the recommended practices themselves change as the team and organization gain skill and grow; it’s not static.)
We want to teach people and get them used to the idea of expecting and acting on feedback, with a short feedback loop. But that’s the easy part.
The harder part comes in evaluating feedback. At these early stages, GROWS specifies very concrete, unambiguous feedback to look for. Later on, at higher skill stages, that process gets a lot more interesting, and a lot more difficult. But by then the team and other participants have built up trust and a familiar habit in seeking and applying feedback, which makes it easier to work together when the harder issues come up.
The idea of GROWS is, itself, an experiment. Any or all of these ideas may not work in any particular context. And that actually is the whole point. We have tried some of this and are trying more and more. We’ll be publishing more information on the method itself, the practices, and the experiments shortly.
We’re gathering feedback, evaluating the results, and adjusting. We hope you’ll join us at growsmethod.com.
“Do not be too timid and squeamish about your actions. All life is an experiment. The more experiments you make the better.” — Ralph Waldo Emerson
— Andy Hunt