I woke up late that morning, drank my coffee, and spent most of the day thinking about problems in the design and development process on various projects. Yuck. So at the end of the day, I pulled the post, resolved to focus on measurement anyway, but recognized that I'm not going to be able to leave design process methodologies in 2009 after all.
Here's the issue: I see process problems within various projects and teams all around me. Some of those problems seem to be related to a misuse or misapplication of the Agile methodology. On the flip side, some of the great things that are occurring are related to the successful use of the very same concepts. I keep looking for the common and missing threads across these projects, but it's a bigger data set than I expected. I thought (hoped) it was people-related, because if I could narrow it down to a few individuals who had it wrong, that would have been the easiest answer. Then I considered schedule, management, project size, project complexity, and commitment to process or lack thereof. And the answer was that it could be any or all of the above, depending on the specific case. So no easy answers, and no leaving the questions behind.
I do see one common thread, and it's similar to the primary topic I want to address in the measurement tools discussions. Sometimes we all use a good tool in the wrong way. Or we use a good tool for the wrong problem. I'm not alone in my distractibility: the IT industry as a whole suffers from ADHD. We all tend to jump on the latest new tool or methodology, decide that one tool is the holy grail of computing, and use it to excess, applying it to every project and product regardless of its suitability. Then the next new thing comes along, and we leave the last grail behind, failing to carry forward the lessons learned and the components of the tool that were actually beneficial.
Any good tool applied in the wrong way will fail to perform as expected. The real drawback is that someone's choice to use the tool incorrectly sometimes tarnishes the reputation of a very good tool. I think this is what has happened with Agile: it's been applied inappropriately in some cases and ineffectively, without proper planning and thought, in others. This can also happen when we choose our tools to measure performance: use a good tool for the wrong kind of system or problem, and the results will be less than hoped for. This doesn't mean we should stop experimenting with new tools and new uses of old tools, but we need to keep our eyes wide open as we do so.
I had an interesting discussion with a systems engineer recently. He was somewhat dismayed at the prospect of having to demonstrate and prove his conclusions to our customer in advance. He had researched and documented his findings, and wanted to move on to implementation without having to prove what he saw as obvious. According to him, it was the research, documentation, and attention to detail that made one an engineer. I disagreed. Engineers do tend to research, analyze, and document in depth, but every engineer I've ever met cannot resist experimenting and trying to use a tool in some new, unheard-of way, just to see what will happen. And if it works, they will continue to use it even if the tool wasn't originally designed for that purpose.
Engineers will also break things just to see if it can be done and what will happen. Testing to destruction is a game for them, as I learned back in 1997 when my client-server database had 150 engineer clients. Learning to break a tool is highly educational, as is learning how to prevent that break. My solution to stop the engineers from breaking the client workstations was a cabinet full of pre-imaged drives. If they broke their workstation software, I pulled the old drive out, bolted the new one in, and took the old one with me so I could re-image it and stick it in the cabinet for the next failure. That minimized the tampering a bit. It didn't stop it entirely, but the guys made sure any files they needed on that drive were backed up, or they made sure they knew how to undo the changes they made. And I learned how to minimize the impact of their testing on my time. The point is, I've never met an engineer who didn't prove his assertions. We're usually smart enough after a few 'incidents' to control our experiments and conduct them in a safer place, but that doesn't stop the playing. Besides, engineers don't like being wrong, so they do the proofs even if it is just for themselves.
Back to the blog post link, which was another one I received from Cary. It's a good, thought-provoking post, and it mentions some of the shortcomings I see in the application of Agile methodologies. If you take a look at the section in which the author, Jason Cohen, describes the 'general line of reasoning' behind Agile, pay attention to his list of 'therefores'. I see a few that I don't think should be managed via Agile. For example, do you really want to build your architecture iteratively, based on customer feedback? By the time the customer is giving you feedback on an inadequate architecture, you've already got serious problems. That's just one example, but maybe if we stop using Agile as a general-purpose tool and start using it where it is most appropriate, i.e., the application's functionality and interface, we'd get all the positive benefits of Agile without the drawbacks.