Wednesday, December 1, 2010

Scrum Alliance vs. Scrum.org vs. Scrum.

I just finished reading the following article: http://borisgloger.com/en/2010/11/29/scrum-alliance-and-scrum-org-yesterdays-concepts-do-not-solve-todays-problems/

I was going to comment there, and then decided it was really worth its own blog post.

Full disclosure: I am a CSM and a CSPO, and (while not a CST) have co-taught CSM courses in the past. I firmly believe that good education is essential to getting started well with Scrum, and that having some way to identify baseline knowledge and expertise in a field is a boon both to knowledge workers and to potential employers.

I believe that we need a crowd-sourced, community-based Scrum education/certification organization.

Some meaningful alternative to the Scrum Alliance's strategies and methods has been needed for a long time. I had hopes when I first read about the split between the Scrum Alliance and Scrum.org that Scrum.org could be that alternative, but this is plainly not the case.

We have seen, in venues ranging from YouTube to Wikipedia to OkCupid, the power of crowd-sourcing knowledge and creativity. Let's brainstorm on what a crowd-sourced Scrum organization might look like:

1) It needs to have *no* financial interest in owning its ideas, teaching its ideas, etc. The profit motive has arguably distorted the judgment of the professionals at the Scrum Alliance, and seems highly likely to distort the judgment of the professionals at Scrum.org in the same ways.

2) It needs to grow, adapt, and change over time. It needs to take feedback from its users, alter with the increasing knowledge of the community, and always be focused on inspecting and adapting to improve itself.

3) It needs to be small, lightweight, and agile (not "Agile"). We don't need a huge organization with pages of bylaws and a board of directors, we need a website, a few programmers, and a bunch of Scrum practitioners sharing their knowledge to improve the way everyone builds software.

There are already wikis out there that capture and share crowd-sourced knowledge about different aspects of Agile (e.g. http://agileretrospectivewiki.org/ ). Without re-inventing the wheel, how could such a crowd-sourced Scrum organization draw from the community as a mass of creative, experienced individuals, to give back to the community in the form of useful, free educational resources, or a meaningful certification process?

I'm much enamored of the OkCupid model, which combines mathematical analysis with user-provided data-gathering mechanisms. Something as simple as a correlational model might be all that is needed: it would accept questions from the community, correlate individuals' answers to them with those same individuals' Scrum experience (number of years doing Agile, number of years doing Scrum, number of Scrum projects, number of successful Scrum projects), and then allow community members to take a selection of the questions and receive a "Scrum Experience Score".
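To make that concrete, here is a rough sketch (in Python, with invented respondents and questions) of the simplest version of that correlational model: weight each question by how strongly agreement with it correlates with self-reported Scrum experience, then score new respondents by summing the weights of the questions they agree with. Everything here is a hypothetical illustration, not a proposed implementation.

    # A toy "Scrum Experience Score": weight each yes/no question by how strongly
    # agreeing with it correlates with self-reported experience, then score new
    # respondents with those weights. Data and questions are invented.
    # (statistics.correlation requires Python 3.10+.)
    from statistics import correlation

    # Each existing respondent: yes/no answers (1/0) plus years of Scrum experience.
    respondents = [
        {"answers": [1, 0, 1, 1], "scrum_years": 6},
        {"answers": [0, 0, 1, 0], "scrum_years": 1},
        {"answers": [1, 1, 1, 0], "scrum_years": 4},
        {"answers": [0, 1, 0, 0], "scrum_years": 2},
    ]

    years = [r["scrum_years"] for r in respondents]
    num_questions = len(respondents[0]["answers"])

    # Pearson correlation between each question's answers and years of experience.
    weights = [
        correlation([r["answers"][q] for r in respondents], years)
        for q in range(num_questions)
    ]

    def experience_score(answers):
        """Correlation-weighted sum of a new respondent's answers."""
        return sum(w * a for w, a in zip(weights, answers))

    print(round(experience_score([1, 1, 0, 1]), 2))

A real version would need far more respondents and some guard against gaming the questions, but the core mechanism really is this small.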

Thoughts?

Saturday, October 9, 2010

The Brick-Layer and the Doctor

There is a long-standing belief among the planners and architects of the software world that coding is like the work of a brick-layer. Misled, perhaps, by the fact that the elements of computer languages are few and identical, they model the writing of software as a simple, linear, step-by-step process where the same fundamental operations are repeated over and over again. Slap down some mortar, lay a brick, true up the edges, lather, rinse, repeat.

They could not be more wrong.

Why do I say that with such confidence?

Because anything which is actually simple, linear, and step-by-step gets automated. Not all at once, admittedly--it took us barely two decades to get from the first general-purpose electronic computer (ENIAC, 1946) to the first object-oriented programming language (Simula 67, 1967).

Stop and think about that. From plugging and unplugging a large web of electrical cables to writing object-oriented code...in just 21 years.

It is not possible for competent programmers to be brick-layers. Competent programmers stop after laying down a couple of rows of bricks, say "A machine could do this!" and proceed to write one that does.

What, then, is the role of a programmer?

The world presents us with a complex, ill-defined, fuzzy set of problems. Most of them overlap. Some of them have a common cause. Some of them look very much like other problems with completely different causes. Millions of dollars -- even lives -- hang in the balance.

Programmers are doctors. Businesspeople present us with a problem: "I want a billing system." "This old system is too slow and it keeps crashing when we try to run payroll." "Why can't it just work the way I want it to?!?" Our work starts with diagnosis, and continues through selecting an appropriate treatment, tracking our patient's progress to make sure they comply with it, and following up with additional treatments if the first one doesn't work.

Over the next couple of weeks, I'll be writing a series of articles based on this metaphor. My goal is to help programmers and managers alike break free from the misconceptions that programming is unskilled, repetitive labor, that programmers are interchangeable "resources", and that the best way to write good code is to follow strict recipes with rigor and precision.

The world needs more good code. I want to help you write it.

Next...Diagnosing Sick Code.

Wednesday, August 11, 2010

Another excellent quote from the same source...

I propose that the real issue is that design is not really a beneficial activity in software development, and to say "The Source Code Is The Design" is trying to use semantics to gloss over the issue.

I feel this is an important distinction if the goal is to remove the "design" stage from the software development process. Rather than being afraid of being accused of "not doing design", we need to turn the debate around to be "Why should we do design?"

For some tasks, it may be much more cost effective to create a design and evaluate the design before building the actual product. For software, this is not the case. For years, software has struggled to come up with something to use for "design." We had flow charts, PDL, Data Flow Diagrams, prose descriptions, and now UML. With software, however, it takes as much time to create the design as the actual software; the design is more difficult to validate than the actual software; and the simplifying assumptions made in the design are often the critical issues to evaluate in the software. For these reasons, it simply is not cost effective to design, iteratively correct the design, then write the software (and then iteratively correct the software). It is better to start with the software and iteratively correct it.

I believe it is time to explicitly state the long held secret of software, we do not need to do design; design is ineffective and costly. -- WayneMack

From http://c2.com/cgi/wiki?TheSourceCodeIsTheDesign

Best quote ever on software specifications.

Walden Mathews, talking about specifications documents and functional specs and the like, says:

"Specification" is interesting nomenclature, because it's really the lack of specific-ness that keeps it from being the product.

From: http://c2.com/cgi/wiki?TheSourceCodeIsTheProduct, which I got to from http://c2.com/cgi/wiki?TheSourceCodeIsTheDesign.

He's absolutely right. The source code is the first thing specific enough to actually constitute the system—and in a well-written object-oriented system in a high-level language, it’s often just as readable as (if not more readable than) the original “specifications” document.

Monday, May 24, 2010

Acceptance Testing and the Testing Pyramid

For the past couple of months, I've been working with a client who is seeking to get the best value they can out of their testing automation efforts. One of the big opportunities I've seen for them to increase the value of their tests is to adhere to the "Testing Pyramid" -- the idea that you should have lots of unit tests, fewer integration tests, even fewer functional tests, and very few acceptance tests at the top of the pyramid.

But in the process of working this out with them, I've been noticing that how people define acceptance tests varies widely. And then I came across this interesting review of an article by Jim Shore on whether to automate acceptance tests at all:

http://www.infoq.com/news/2010/04/dont-automate-acceptance-tests

(the original article is here: http://jamesshore.com/Blog/The-Problems-With-Acceptance-Testing.html )

Having read the original article, I don't think the reviewer quite fairly represents Shore's position. Shore is not so much arguing against automating acceptance tests, as arguing that automating acceptance tests, by itself, doesn't buy the business value that he once thought it did.

Me? I think that the key value lies in defining "acceptance tests" as literally that--the automated reification of the Product Owner's Acceptance Criteria.

As any Agile developer knows, the five-to-seven Acceptance Criteria on a particular story represent only a fraction of the expected functionality--but they represent, ideally at least, the core business value and core expectations of the PO. By automating those expectations in a "visible execution" testing tool like Selenium, we gain the dual benefits of securing those core expectations against regressions and creating a built-in "known working demo" of the functionality the customer most desires.
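To make this concrete, here is a minimal sketch of what such an acceptance test might look like using Selenium's Python bindings. The application URL, element IDs, and confirmation message are hypothetical stand-ins for whatever the story's acceptance criteria actually name; the point is that the test reads like the criterion it automates and runs visibly in a real browser.

    # Hypothetical acceptance criterion: "A user can create an invoice for a customer
    # and sees a confirmation." All URLs, IDs, and messages are invented for this sketch.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("http://localhost:8080/invoices/new")  # hypothetical application URL
        driver.find_element(By.ID, "customer").send_keys("ACME Corp")
        driver.find_element(By.ID, "amount").send_keys("1250.00")
        driver.find_element(By.ID, "create-invoice").click()

        # The assertion mirrors the PO's acceptance criterion, not an implementation detail.
        assert "Invoice created" in driver.find_element(By.ID, "status").text
    finally:
        driver.quit()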

But what about the rest of the functionality, the part that the delivery team fills in, that isn't explicitly mandated by the acceptance criteria? That's where you can (and should) drop down a level, into functional testing--headless browsers like HtmlUnit for web applications, and behind-the-UI testing tools like FIT for desktop applications. Less brittle than automated acceptance tests and considerably faster-running, these tests pay for those advantages by being opaque to the PO. But we've already written the PO's core expectations in a visible, user-comprehensible form.
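As a contrast, here is a sketch of the same kind of billing rule exercised below the UI. I've used plain Python unittest with an in-file stand-in for the application layer rather than HtmlUnit or FIT, purely to keep the example self-contained; the names and the tax rule are invented. The trade-off is exactly the one described above: fast and robust, but nothing a PO would recognize on sight.

    import unittest
    from dataclasses import dataclass

    # Hypothetical application-layer code; in a real project this would live in the
    # billing module, not in the test file.
    @dataclass
    class Invoice:
        customer: str
        total: float

    def create_invoice(customer: str, amount: float, tax_rate: float) -> Invoice:
        if amount < 0:
            raise ValueError("amount must be non-negative")
        return Invoice(customer=customer, total=round(amount * (1 + tax_rate), 2))

    class InvoiceRulesTest(unittest.TestCase):
        def test_invoice_total_includes_tax(self):
            invoice = create_invoice("ACME Corp", 1250.00, 0.08)
            self.assertAlmostEqual(invoice.total, 1350.00)

        def test_rejects_negative_amounts(self):
            with self.assertRaises(ValueError):
                create_invoice("ACME Corp", -5.00, 0.08)

    if __name__ == "__main__":
        unittest.main()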

The problem comes when people take "acceptance tests" to mean "system tests done through the UI", and then attempt to test their entire application via this type of test. I think that's what Shore is getting at when he says "plus a full set of business-facing TDD tests derived from the example-heavy design". I've worked on projects like this, where the legacy technology we were using (ColdFusion) gave us literally no entry points between the web page and the database. Yes, we were able to build a good application this way--but by the time we were done, the entire test suite took over seven hours to run, and until the end of each Sprint we were only actually executing the roughly 10% of it deemed "most relevant". That's not really a recipe for good iterative development or TDD.

As long as "acceptance tests" are kept at the level (and scope) of the acceptance criteria, they become a powerful tool both for communicating with the PO and for securing the PO's core business value against regressions. Then, testers and developers are free to use all the fast, efficient, lower-level tests they want to provide test-driven design and a refactoring safety net.