On Science and Software Development

Dave Cridland - Nov 28 '17 - Dev Community

I had an epiphany last night.

So proud was I, that I chatted with our lead tester this morning, and found he'd already had the same thought a couple of years ago. Oh well, I've never claimed to be original. Here's Jon's thought that I, also, had two years later.

So I was thinking about the nature of Science (and, more particularly, the scientific method), and about the nature of Computer Science, and it struck me - Computer Science isn't Science at all.

Science is about extending our knowledge of existing systems. Astrophysics, for example, gives us an ever-increasing insight into the mechanics of stars and galaxies, of black holes and dark matter. It's amazing stuff, it really is. The key to science is that it allows us to answer questions - how does a star form, what holds the galaxies together, and so on.

Computer Science is about devising new algorithms, or analysing existing ones. The questions it answers aren't about understanding the unknown; they're about learning how to do something. Computer Science focuses on algorithms, proofs, and so on.

So the relationship that Computer Science has with development is rather like the relationship that Mathematics has with Physics. Mathematics provides the tools with which to understand the universe, not the understanding itself. So one could argue that Computer Science, though clearly a very valuable discipline, isn't actually Science at all. It's Computer Mathematics.

So where is the Science in software? Is there any? Well - yes. And I don't think it's something we study enough.

Given any piece of software, there are two important questions we should ask about it:

  • Does it work?
  • Why not?

If you're amazingly lucky, you don't have to answer the second - but when you need to answer it, you really need to answer it. These two questions form the disciplines of Testing and Debugging.

In the Scientific Method, popularised by Sir Francis Bacon a few centuries ago, there's a standard way of answering such questions.

  • You guess - "Hypothesis"
  • And then you figure what would prove, or disprove, your guess - "Prediction"
  • And then you do that - "Experiment"
  • Rinse, repeat, until your guess is right

So with Software Testing, we start with the hypothesis that the software works. Probably, more usefully, we break that down into a set of ways that it would work - some of these from a specification or requirements set or something, and many others from common sense.
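To make that concrete, here's a minimal sketch of what that breakdown might look like: each small test is one hypothesis, with the prediction baked into its assertion. The `login` function and its behaviour are invented for illustration, not taken from any real system.

```python
def login(username, password):
    """Hypothetical system under test, invented for this sketch."""
    valid = {"alice": "hunter2"}
    return valid.get(username) == password

def test_correct_password_logs_in():
    # Hypothesis from the spec: a known user with the right password gets in.
    assert login("alice", "hunter2") is True

def test_wrong_password_is_rejected():
    # Hypothesis from the spec: the wrong password is kept out.
    assert login("alice", "swordfish") is False

def test_unknown_user_is_rejected():
    # Common-sense hypothesis, probably written in no spec: unknown users are kept out.
    assert login("mallory", "hunter2") is False

# Running each experiment; a failed assertion means a hypothesis was disproved.
test_correct_password_logs_in()
test_wrong_password_is_rejected()
test_unknown_user_is_rejected()
```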

With debugging, it's useful to be a little more subtle. So the bug affects user login. We've checked the password is correct, so it's not that. Maybe the expiry check is wrong? If so, that would mean... And so we try...
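Sketched in code, that hypothesis-prediction-experiment loop might look like this. The expiry check here is a hypothetical reconstruction of the imagined bug, not real code:

```python
import datetime

# Hypothesis: the expiry check is wrong - the date comparison is reversed.
# Prediction: if so, an account that expires well in the future will be
# reported as already expired.
# Experiment: feed the suspect function a known-good input and observe.

def account_expired(expires, today):
    """Hypothetical check lifted from the imagined login path."""
    return expires > today  # the suspected bug: comparison reversed

today = datetime.date(2017, 11, 28)
fresh_account_expiry = datetime.date(2020, 1, 1)

# Prints True: the prediction held, the hypothesis survives, keep digging here.
print(account_expired(fresh_account_expiry, today))
```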

Debugging is the most poorly taught and discussed skill in software development. I don't think Testing is well taught either. In both cases, I think we focus too much on the mechanics of what tools to use (the debugger, the unit test) and too little on the method for deciding what to test and how to debug. To put it another way, we concentrate on the experiment, and not the hypothesis or prediction phases.

And this is batshit crazy - debugging is the single most important skill a developer can have. We should be working to create a solid, reusable approach to debugging instead of relying on gut feeling and ad-hoc approaches.

Without solid testing, whatever we ship is an unknown, volatile thing (well, unless you formally prove the entire thing, which in some cases is sufficient). Developers tend to bias toward the types of experiment they understand, and condense testing into a single procedure and set of metrics. Code coverage, unit testing, and so on become the favoured approach. At the other end of the scale, Exploratory Testing, once the primary approach, is looked down on for being too soft and lacking rigour - it's considered to be based at best on the gut feeling we prize so highly when debugging. Yet what's important is that the correct hypotheses have been formed, and tested appropriately.

This way of thinking also raises some more troubling thoughts. Test Driven Development is a popular approach to development that brings some clear benefits. But if I described this as conducting your experiment to fit your hypothesis, it suddenly doesn't sound as good. I don't, actually, think it's quite like that, although I now have niggling concerns with the approach that I didn't before. Could it be that the "tests" we write for TDD are not, quite, tests? Should we reconsider them later in the process?
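Here's a toy illustration of that niggle, with all names invented for the purpose: the test is written first, from my mental model, and the code is then written to satisfy it. If the mental model is wrong, test and code agree with each other, and the "experiment" can't disprove anything.

```python
# Written first, straight from my mental model of what "discount" means.
def test_discount_applies_to_total():
    assert apply_discount(200.0, 0.25) == 150.0

# Written second, to make the test pass. If the model was wrong (say,
# discounts should never drop the price below a floor), both the test and
# the code share the same mistake, and running the test can't reveal it.
def apply_discount(total, rate):
    return total * (1 - rate)

test_discount_applies_to_total()
```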

I think there's a good case for applying more scientific rigour to both debugging and testing. I think making these two areas more structured in their overall approach would make them both easier to learn, and more beneficial to the overall project. And also, I can finally get to describe myself as a Scientist.
