#4: How to Slay a Werewolf

In the first three episodes of the podcast I described my journey in becoming the “Grand Geek”. Before continuing the story, I will reflect on some of the lessons I learnt in the earlier period of my software engineering life.


Looking back at those years now, I can say that I worked with, and helped pioneer, some of the key building blocks of what we now call “digital transformation” or the “4th Industrial Revolution” (4IR). These were:

  • Object-Oriented Design – or OOD
  • The concept of a ‘digital twin’, and
  • Robots and how to combine them with AI to do useful tasks.

I will come back to all of these in future episodes and future seasons of the podcast, but I will go through them briefly again to see how they relate to the work that I did in those early years in software engineering.


When I worked at the University of Manchester Institute of Science and Technology (UMIST) in the early 1980’s I realized that we needed special ‘dialects’ of programming languages. Programming languages are how the application developer – the programmer – explains to the computer what needs to be done. A language like FORTRAN or Pascal was fine for having a broad general conversation with the computer, but as soon as you started communicating about something more complicated you needed some new words in your vocabulary. For example, if you’re writing a program for digital control systems (as we were at UMIST) you needed a language that had words for concepts like vectors and matrices. I therefore invented a new programming language – actually a special dialect of Pascal – called PLASMA. I now realize that what I was doing was OOD. To make it work at that time – in the early 1980s – I needed to write a hugely complex preprocessor. What the modern programmer would do now is use her favourite modern OO language (like Java or C++ or Python) to create classes called Vector and Matrix – a really simple task! What this really teaches us is how to deal with complexity. Software applications are complicated. It’s been said that some of the most complicated artifacts ever built by humans are very large software systems. OO is a way of developing abstract concepts that allow us to focus on the “big picture” rather than getting caught up in minute details.
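To show just how light this task has become, here is a minimal sketch of the kind of Vector and Matrix classes a programmer might write today in Python. The names and operations are illustrative – this is not a reconstruction of PLASMA itself.

```python
class Vector:
    """A simple mathematical vector, hiding its representation behind methods."""

    def __init__(self, components):
        self.components = list(components)

    def __add__(self, other):
        # Component-wise addition: lets us write v1 + v2 naturally
        return Vector(a + b for a, b in zip(self.components, other.components))

    def dot(self, other):
        # Inner product of two vectors
        return sum(a * b for a, b in zip(self.components, other.components))


class Matrix:
    """A matrix stored as a list of rows."""

    def __init__(self, rows):
        self.rows = [list(r) for r in rows]

    def multiply(self, vec):
        # Matrix-vector product: each output component is a row dotted with vec
        return Vector(Vector(row).dot(vec) for row in self.rows)
```

With these two classes in place, a control-systems program can talk about vectors and matrices directly – exactly the extra ‘vocabulary’ that the preprocessor had to bolt onto Pascal.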

Working at GEC Marconi research labs on flexible robotic assembly I did a lot more with OO – this time using a real OO language (called Smalltalk). I invented really high-level abstractions called “smart products”, which helped tremendously in the work we did.

It is only because of OO that today’s developers are able to write the software that lies at the heart of 4IR. It’s important, however, that developers really understand all of the OO concepts – things like encapsulation, inheritance, polymorphism, interfaces, etc. If developers are reading this and have no idea what these words mean, they should go and find out.
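For readers who would like a quick reminder, here is a tiny, entirely hypothetical Python example that touches all four ideas at once:

```python
from abc import ABC, abstractmethod


class Shape(ABC):                  # an "interface": declares what, not how
    @abstractmethod
    def area(self):
        ...


class Rectangle(Shape):            # inheritance: a Rectangle is-a Shape
    def __init__(self, width, height):
        self._width = width        # encapsulation: underscores mark internal state
        self._height = height

    def area(self):
        return self._width * self._height


class Circle(Shape):
    def __init__(self, radius):
        self._radius = radius

    def area(self):
        return 3.14159 * self._radius ** 2


def total_area(shapes):
    # polymorphism: each shape answers area() in its own way
    return sum(s.area() for s in shapes)
```

The caller of `total_area` never needs to know which concrete shapes it is summing – that is the “big picture” thinking OO makes possible.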


Consider something that is potentially complex in the “analogue” world. What about a car? How would you model it in a computer?

It depends on what aspect of the car you want to model…

…. and that’s the key question! We could model how the car moves, how it accelerates, how it stops, and so on, by using Newton’s Laws of Motion. We could model how the engine burns fuel. We could model how the air-conditioner works or how the airbags behave in a crash. We could build either simple or complex models for these aspects of the car. In each case the model is represented by a program in a computer. It mimics something in the physical world. We call these computer models “digital twins”. Having these twins gives us a cheap and very easy way to simulate how the car behaves in certain circumstances. We could give our digital car new types of tyres or a new engine and analyse how this would affect the performance of the car without moving from our seat in front of the computer. This makes it very cheap and very quick.
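To make this concrete, here is a toy digital twin of one aspect of the car – its straight-line acceleration under Newton’s second law. The mass, engine force and drag coefficient are made-up illustrative values, not data for any real car.

```python
def simulate_acceleration(mass_kg, engine_force_n, drag_coeff,
                          dt=0.1, duration=10.0):
    """Velocity (m/s) after `duration` seconds, by simple Euler stepping."""
    velocity = 0.0
    t = 0.0
    while t < duration:
        drag = drag_coeff * velocity ** 2          # aerodynamic drag opposes motion
        accel = (engine_force_n - drag) / mass_kg  # Newton's second law: a = F/m
        velocity += accel * dt
        t += dt
    return velocity


# Try a bigger engine without leaving your seat:
standard = simulate_acceleration(1200, 4000, 0.5)
upgraded = simulate_acceleration(1200, 6000, 0.5)
```

Comparing `standard` with `upgraded` answers the “what if we fitted a new engine?” question in milliseconds – the whole point of having a twin.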

Working at Imperial College I learnt a lot about digital twins while building large and complex econometric models. Working on electrification with my program CART I built another type of model (this time simulating the behaviour of a power grid). The biggest lesson I learnt was that all models are approximations! They can be very useful, but if you are developing and using a model you should always be aware of its limitations. You should always understand what the digital twin can’t do!

In modern applications – those associated with 4IR – a digital twin is often connected to its analogue sibling in real time via sensors and actuators, which gives the illusion of near-perfect duplication. This makes it even more important to understand the twin’s limitations. We’ll visit this issue in a future podcast episode.


Working at the GEC-Marconi Research Centre Lab I understood just how amazing humans are at doing certain tasks – even fairly mundane tasks. At the Lab we were working on flexible assembly. At that time industrial robots were used in what we called “hard automation”. Think of a production line assembling cars. As the car moves along the line, parts are added step by step. In Henry Ford’s production line human workers added each part. By the 1980’s assembly robots had replaced some of these humans. This was called “hard automation” because these types of production lines were carefully set up to produce one type of product in large quantities. The scenario we were researching at our research Lab was very different.

The problem we were solving said: “Here’s a large box of parts and some detailed instructions. Build it.” A human can do this. The complexity of the product the human can build would be determined by her level of skill, but humans are really good at this type of task. Even if the detailed instructions are not that clear, we figure it out. Humans generally find these types of task quite easy. In the 1980’s even the most sophisticated robots and state-of-the-art AI – both of which we had – found this to be an incredibly difficult task. Even today, with all of the amazing inventions and developments in computer vision, AI, ML, etc, this task would be very challenging for a robot.

The lesson I learnt, and one that is still valid today, is that replacing humans with robots – if it’s something we should be thinking of doing at all – is never going to be easy. I personally believe that we should always focus on understanding what robots do well and what people do well. We should then work on ways to help humans and robots work together.


What I have done, so far, is to summarize some of the major lessons I learnt in my early days as a software engineer. The real focus of this episode, though, is to explain what “Software Engineering” actually is.

Answering questions about the nature of Software Engineering is one of my favourite activities!  After I became a full professor at Wits in 2000 I was required to present a public lecture. I spoke about my discipline, Software Engineering. I called my lecture “How to slay a werewolf”. So … let me tell you how to slay a werewolf.

Before I do so, however, let me answer the question: What is software engineering? In the late 1960’s computers had been around for 15 to 20 years, and a huge problem was emerging. Bear in mind computers were still the huge mainframes that filled whole rooms that I described in Episode 1. Many projects to develop relatively large software applications were running late and over budget. In many cases they didn’t work properly. In other words they were of very poor quality. People started to call this the “software crisis”. In 1968 and 1969 NATO sponsored two conferences – the first in Garmisch in Germany, the second in Rome – to discuss ways of solving the software crisis. These conferences gave birth to the discipline of Software Engineering. A software engineer was defined by the IEEE as “someone who applies a systematic disciplined approach to the development of software”.

One of the biggest challenges for the software engineer is to deal with complexity. Serious software is BIG. How big? For example, the Linux kernel in the year 2000 was made up of 4.1 million lines of code. To read this aloud – non-stop – would take 590 days! At the same time Microsoft Windows NT was over 40 million lines of code. I guess the message here is that to do anything useful with software requires tens of thousands of lines of code to be written – unless we reuse existing code from libraries and frameworks. Big software applications also have extremely large numbers of states, which means they can’t be tested exhaustively. It is, therefore, essential to do as the definition of software engineering suggests: “apply a systematic disciplined approach to the development of software”.
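A back-of-the-envelope calculation shows why exhaustive testing is hopeless. The figures below are illustrative arithmetic, not measurements of any real system:

```python
# Even a program whose entire state is just 64 independent boolean flags
# has 2**64 distinct states.
flags = 64
states = 2 ** flags

# Suppose we could somehow test a billion states per second...
seconds_needed = states / 1_000_000_000
years_needed = seconds_needed / (60 * 60 * 24 * 365)
# ...it would still take roughly 585 years, and real systems
# have vastly more state than 64 flags.
```

That is why the discipline leans on systematic design, reviews and targeted testing rather than on trying every possibility.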


Before I deal further with the issue of werewolves, we need a definition. What is a werewolf?

Firstly it must be said that they are legendary creatures (or are they?). Most of the time they are to be found living among us in the form of a man (in legends they usually are men, not women). And then – when the moon is full – they transform. They turn into vicious creatures – half man, half wolf.

In medieval times people lived in terror at the thought of there being werewolves among them. And then, in Hollywood movies in the 1940’s, scriptwriters invented a way to slay (Hollywood speak for “kill”) a werewolf. We were told that you have to shoot him through the heart with a silver bullet! So that is how you slay a werewolf.

But what has this to do with software engineering?

When a software engineer looks at a big development project, what other type of project is it similar to? Is it the same as building a large skyscraper? Or is it similar to landscaping and maintaining a garden? Fred Brooks, one of the fathers of software engineering, made an astonishing statement in a paper he wrote in 1987. He said that a software project was like a werewolf! He said that often a software project seems to be going along really well – no issues, no problems, just another happy project ticking along – when suddenly it transforms! It becomes a huge scary monster full of errors, running late and costing way more than anyone expected. When this happens the software engineer starts rushing around looking for a silver bullet. If the full moon transforms the friendly man next door into a frightening werewolf, what transforms a happy project into a scary monster? The answer: insufficient information; insufficient time; inadequate testing; inadequate skills.


The KEY question in Software Engineering is How to slay werewolves? Over the years, software engineers have come up with 4 different answers to that question. Let me first list them, and then I’ll explain in a bit more detail what each means.

  1. The first – obviously – is “find the silver bullet”;
  2. The second is “stop the full moon rising”;
  3. The third is “there are no werewolves, silly”;
  4. The fourth is “welcome the werewolf into your life”.

So, let’s look at these approaches to software engineering one by one.


How do we “find the silver bullet”? Since 1968 and the NATO conferences in Germany, Software Engineers have been searching for the “Silver Bullet”. In the 1970’s it was structured programming and design, in the 1980’s it was object-orientation. In the 1990’s it was Computer Aided Software Engineering (or CASE) tools. In the 2000’s it was design patterns and service-oriented architecture (SOA); today it’s DevOps. However, Fred Brooks and others have argued that there will never be a silver bullet.

So … if this is true, and there will never be a silver bullet, what else can a software engineer do?


Another possible solution is to go for the second option I listed, i.e. “stop the full moon rising”. I said earlier that software projects turned into werewolves because of insufficient information, insufficient time, inadequate testing, and inadequate skills. Software Engineering, as a discipline, has over many years developed methodologies to deal with these things. We can use what’s called a “plan-driven” or a “waterfall” approach. In this approach, the development team collects lots of information about what’s required (we call this requirements engineering). They then create an architecture, a design and a detailed plan. The software is then built, according to the design and the plan. It is then exhaustively tested and delivered. Before the project starts, people with all of the necessary skills are recruited into the team.

I could write at length about the pros and cons of the plan-driven approach, but – to put it briefly – it sometimes works (in other words the werewolf never appears) and it sometimes doesn’t.


The third approach is similar to the second. I’ve called it “there are no werewolves, silly”. In software project terms, what is the werewolf? It’s a project running late, it’s a project running over budget, it’s a project with quality issues. But how real are these? If a project is “late” what does that really mean? It means that it took longer than we thought it would. We estimated that the project would take 2 months and it lands up taking 5 months. Could it be that the problem was with the estimate that said it would take 2 months? If we had estimated 5 months we would have been spot on. So – the “there is no werewolf” approach says that we must improve our ways of planning, estimating and testing.


The fourth approach, and the one that is by far the most popular today, is to “welcome the werewolf into our lives”. It’s called Agile Development and is based on the idea that the “plan-driven” or “waterfall” approach is the actual problem. How can we hope to stop the full moon rising? We know that the werewolves are out there. We know that some projects will turn nasty. We need to find ways to deal with these simple truths. So Agile says “find ways to work with changing requirements and timelines. Welcome uncertainty and deal with it in an Agile way.” In the move away from “waterfall” to “agile”, power shifts from the project manager, who owns and drives “the plan”, to the developers, who ride the waves of uncertainty and deliver software piece by piece. We call it iterative and incremental development.

We will devote a whole episode to Agile. In 2005 I brought Kent Beck, one of the founders of Agile Development, to South Africa. His visit has played a major part in introducing Agile to the local software engineering community. Kent has visited several times since and has become a good friend of mine and of the African software development community. I will hopefully have Kent join me for the podcast episode on Agile.


Let us briefly return to locate the narrative in relation to what was happening in my life. It was the late 1990’s and I was becoming an expert in Software Engineering. How did this involvement in Software Engineering fit in with everything else?

In my formal academic work at Wits I positioned myself as part of the software engineering discipline. Professor Alastair Walker had been running an entity called SEAL (Software Engineering Applications Lab) in the Department of Electrical Engineering for many years. Alastair is one of South Africa’s experts in Software Quality. I joined SEAL, and when Alastair left Wits in the late 1990’s, I took over as the head of software engineering. I started teaching a course in Software Engineering to 4th year engineering students and final year computer scientists. I also developed a number of postgraduate courses. In addition I supervised MSc and PhD students doing Software Engineering projects. I also did my own research and published several papers on topics in the field of Software Engineering.

However, my major interest became how I could help to transform and grow South Africa’s ICT industry.  This will be the topic for the next episode of the podcast.
