The 10X Programmer, and Other Myths
Laurent Bossavit is a myth buster. Beyond his role as consultant and Director at ‘Institut Agile’ in Paris, France, he’s the author of ‘The Leprechauns of Software Engineering’. In researching that book, he traced the evidence behind beliefs that are common among software engineers. In this interview, he explains how folklore can turn into fact and why the 10X Programmer, the Exponential Defect Cost Curve, and the Software Crisis may not be as real as they seem.
Bossavit was an early advocate of Extreme Programming. He even authored the first book in French on the topic. And it was while researching evidence to substantiate its benefits that he first stumbled upon some of the ‘evidence’ put forward to support popular theories in software engineering.
“I didn’t wake up one morning and think to myself, ‘I’m going to write a debunker’s book on software engineering’”, says Bossavit. “It actually was the other way around. I was looking for empirical evidence, anything that could serve as proof for Agile practices. And while I looked at this I was also looking at evidence for other things… related to Agile practices. For instance, the economics of defects, and just stuff that I was curious about, like the 10X programmer thing. So, basically, because I was really immersed in the literature and I’ve always been kind of curious about things in general, I went looking for old articles, for primary sources”.
A common tenet in software engineering is the idea of the ‘10X programmer’: the notion that there can be up to a tenfold difference in the productivity and quality of work produced by programmers with the same amount of experience. Bossavit says, “it’s actually one that I would love to be true. If I could somehow become, or if I found myself to be, a 10X programmer, maybe I would have an argument for selling myself at ten times the price of cheaper programmers”.
However, “when I looked into it, what was advanced as evidence for those claims was not really what I had expected, not what you would think would be the case for something people say is supported by tens of scientific studies and research into software engineering. In fact, what I found when I actually investigated all the citations that people give in support of that claim was that in many cases the research was done on very small groups and was not extremely representative”.
The main studies are now quite dated; there’s “this whole set of evidence that was done in the seventies, in languages like Fortran or COBOL, and in some cases on non-interactive programming, so systems where the program was input and you got the results of compiling the next day. The original study, the one cited as the first, was actually one of those; it was initially designed not to investigate productivity differences but to investigate the difference between online and offline programming conditions”.
So that is the first problem with the claim, Bossavit says. “How much of that is still relevant today is debatable. How well we understand the concept of productivity itself is also debatable. And also, many of the papers and books that were pointed to were not properly scientific papers. They were opinion pieces or books like Peopleware, which I have a lot of respect for, but it’s not exactly academic”. What’s more, recent “papers did not actually bring any original evidence in support of the notion that some programmers are 10X better than others. They were actually saying ‘it is well known and supported by this and that paper’, and when I looked at the original paper they were referencing, it was in turn saying… things like ‘everybody knows, since the seventies’. So you ended up with these circles of citations, with no actual proof at the end.
“My conclusion was that the claim was not actually supported. I’m not actually coming out and saying that it’s false, because what would that mean? Some people have taken me to task for saying that all programmers are the same, and that’s obviously stupid, so I cannot have been saying that. What I’ve been saying is that the data is not actually there, so we do not have any strong proof of the actual claim”.
Another ‘fact’ that Bossavit takes issue with is the Exponential Defect Cost Curve. This is the claim that if it costs one dollar to fix a bug during the requirements stage, it will cost ten times as much to fix in code, one hundred times in testing, and one thousand times in production. “That one is even more clear cut”, says Bossavit. “Those are actual dollars and cents, right? So it should be the case that, at some point, a ledger or some kind of accounting document originates the claim. So I went looking for the books that people pointed me to”. But what he found was that “when you look at the data and you try to find what exactly was measured”, rather than saying ‘we did the measurements on this or that project’, the books or the articles typically said ‘this is something everybody knows’, and the references were to this or that article or book. “So I kept digging… and in many cases I was really astonished to find that at some point along the chain basically someone just made evidence up.
I could not find any solid proof that someone had measured something and came up with those fantastic costs”. “You can find some empirical data in Barry Boehm’s books and he’s often cited as the originator of the claim. But it’s much less convincing when you look at the original data than when you look at the derived citations”, Bossavit says.
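To be concrete about what is being questioned: as commonly stated, the curve amounts to a simple geometric progression, with the cost of a fix multiplying tenfold at each later phase. Here is a minimal sketch of that claimed arithmetic; the phase names and the tenfold multiplier simply restate the folklore figures, not measured data:

```python
# The defect cost curve as folklore states it: the cost of fixing a bug
# grows tenfold with each later phase in which it is found.
# Illustrative of the claim only, not of any measurement.
BASE_COST = 1.0  # claimed dollars to fix a defect caught at requirements
PHASES = ["requirements", "code", "testing", "production"]

for exponent, phase in enumerate(PHASES):
    cost = BASE_COST * 10 ** exponent
    print(f"{phase:>12}: ${cost:,.0f}")

# Output:
# requirements: $1
#         code: $10
#      testing: $100
#   production: $1,000
```

Bossavit’s point is not that fixing bugs late is free, but that these tidy powers of ten cannot be traced back to any solid measurement.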
A third claim that Bossavit has researched is ‘The Software Crisis’. The idea is common in mainstream media reporting on large IT projects, which highlights high failure rates and suggests that all such projects are doomed to fail. Bossavit says that “this is a softer claim, right, so there are no hard figures, although some people try. So one of the ways one sees the software crisis exemplified is by someone claiming that software bugs cost the U.S. economy so many billions, hundreds of billions of dollars per year”.
But what he found most interesting was that “the very notion of the software crisis was introduced to justify the creation of a group for researching software engineering. So the initial act was the convening of the conference on software engineering; that’s when the term was actually coined, back in 1968, and one of the tropes, if you will, to justify interest in the discipline was the existence of the software crisis. But today we’ve been basically living with this for over forty years and things are not going so bad”.

As Bossavit puts it, “when you show people a dancing bear, one wonders not whether the bear dances well, but that it dances at all”, and “to me technology is like that. It makes amazing things possible, it doesn’t always do them very well, but it’s amazing that it does them at all. So anyway I think the crisis is very much over-exploited, very overblown, but where I really start getting onto firmer ground is when people try to attach numbers to that”. He found that in some cases the methodology behind those numbers was that researchers “picked up the phone and interviewed… a very small sample of developers and asked them for their opinion, which is not credible at all”.
So how do such myths take root? “Some of them come about from misunderstandings”, Bossavit says. “I found out in one case, for instance, that an industry speaker gave a talk at a conference and apparently he was misunderstood. So people repeated what they thought they had heard and one thing led to another… So I think that was an honest mistake and it just snowballed”.
“In some cases people are just making things up… and one problem is that it takes a lot more energy to debunk a claim than it takes to just make things up. So if enough people play that little game, some of that stuff is going to just sneak past. I think the software profession kind of amplifies the problem by offering fertile ground; we tend to be very fashion-driven, so we enthusiastically jump onto bandwagons. That makes it easy for some people to invite others to jump, to propagate.”
And how do people react when he challenges these claims? “Somewhat cynically, it varies between ‘why does that matter?’ and a kind of violent denial”, Bossavit says. “Oddly enough, I haven’t quite figured out what makes people so attached to one viewpoint or the other. There’s a small but substantial faction of people who tell me ‘oh, that’s an eye-opener’ and would like to know more, but some people respond with protectiveness when they see, for instance, the 10X claim being attacked.”
So why does it matter? Well, Bossavit says that “claims which are easy to remember, easy to trot out, they act as curiosity stoppers, basically. So they prevent us from learning further and trying to get at the reality, at what actually goes on in a software development project, what determines whether a software project is a success or a failure, and I think that we should actually find answers to these questions.”