#8. Digital Transformation

In recent years almost every statement made by South African government leaders and officials about the future of technology has used the term “4th Industrial Revolution”, or 4IR. Industry executives have followed this lead. I, however, have come to really dislike the term 4IR. Why is this so?


My problem with the term 4th Industrial Revolution, or 4IR, is that it’s become almost meaningless. It’s become a hyped-up catchphrase that everyone uses but very few people really understand. When I first started hearing people talk about 4IR, I would make a point of asking them to define what it means.

Most of the responses I received “defined” 4IR as a new stage that the world is entering in the history of technological change. It started with the 1st Industrial Revolution in the 1700s and the invention of the steam engine, followed by electricity and mass production (the 2nd Industrial Revolution), and then computers (the 3rd Industrial Revolution). The new stage we are now entering (the 4th Industrial Revolution) brings together big data, machine learning, AI, robotics and other new technologies.

However, if we are to define 4IR as a signpost on a timeline that neatly marks out a sequence of “industrial revolutions”, each with its own set of technical innovations, we have a definition that is far too arbitrary. Why, for example, is this the 4th Industrial Revolution and not the 5th or 6th? Why is it not just a continuation of the 3rd? And how do we select the basket of technologies that characterise this new industrial revolution? Should we include quantum computing, gene editing or blockchains in the basket?

The term 4th Industrial Revolution was coined by the German economist Dr Klaus Schwab, the founder of the World Economic Forum (WEF), in his book “The 4th Industrial Revolution”, published in 2016. This was one of the themes of the annual WEF gathering held in Davos, Switzerland, in January 2017. After this meeting we started hearing the term 4IR from many political and business leaders.

As I see it, the difference between the 4th Industrial Revolution (if we accept that there is such a thing) and the previous ones is that 4IR hasn’t yet happened! The 1st, 2nd and 3rd Industrial Revolutions were only “discovered” many years after the fact. A meeting wasn’t held in Manchester in 1760 to announce the coming of the 1st Industrial Revolution! Klaus Schwab also made this point in his follow-up book, “Shaping the 4th Industrial Revolution”, published in 2018.


While I strongly believe that the way in which many people use the term 4IR leads to a great deal of hype, there is definitely something significant happening in the world today … and it’s gaining momentum.

Let’s put the label “4IR” to one side for the moment and consider some of these significant changes. In 2011 the German Government launched an initiative that aimed to modernise German industry. They gave the campaign the label “Industrie 4.0”.  In many ways it was a clever marketing campaign aimed at putting a modern face on German manufacturing.

In 2016, a group of German researchers wrote an academic paper in which they made the point that “Industrie 4.0” needed a proper definition. They pulled together a great deal of material written about Industrie 4.0 and found a number of common threads, which they called “Design Principles” for Industrie 4.0. I find these design principles really useful in understanding what has been changing over the past 10 or 15 years in manufacturing and other sectors of the economy. They also affect sectors beyond the “hard-core” economy, such as education and leisure.


So what are the Industrie 4.0 design principles? There are four that seem to me to be the most important. I’ll list them and then describe each one in detail. They are:

  • Interconnection
  • Digital twins
  • Hierarchical control, and
  • Robotics

Interconnection has to do with the Internet. Although the concept of the Internet goes back to the 1970s … or even the 1960s … when it originated as a system to connect research computers at various American universities, it didn’t initially make a significant impact. It wasn’t until 1991, when Tim Berners-Lee invented the protocols that gave rise to the World Wide Web, and 1993, when Marc Andreessen created Mosaic, the first widely used Web browser, that the world as we knew it began to change.


An interesting side-note is that South Africa was, at first, cut off from the developments around the invention of the Internet in the late 1980s and early 1990s because of international sanctions against Apartheid. A very strong academic boycott against South African universities was in place. Because most of the development of the early Internet was happening at universities and research institutes, South African researchers and academics were excluded from these developments.

The Internet was introduced into South Africa in about 1988 via a very hush-hush arrangement between Mike Lawrie, then Director of Computing Services at Rhodes University in the Eastern Cape, and Randy Bush, who lived in Portland, Oregon, in the USA. A telephone connection was set up between Rhodes University and Randy Bush’s home. Mike Lawrie and others at Rhodes University then collaborated with researchers at a few other South African universities, in particular a number of telecommunications postgraduate students at Wits University, including Angus Hay and Taki Milionis. They used their laboratory’s standard telephone line to link their computers at Wits to the Internet via Rhodes University and Randy Bush’s home. I took great interest in this activity because, as I described in Episode 2, I was working as part of the ANC underground on secure communication from South Africa to the rest of the world.

I remember a number of interesting incidents from this time in the early 1990s. On one occasion Angus and Taki were in huge trouble with the then-head of Wits University’s computer services, Henry Watermeyer. He hauled them into his office for “wasting university resources” on long-distance telephone calls to Grahamstown. Also, at about this time, I ran a workshop for Wits staff on something brand new called “Email”. Only about 10 people attended and most thought “it wouldn’t catch on”. 

We will definitely do a future podcast on the fascinating history of the Internet in South Africa.


The Internet and World Wide Web, as we know them today, are mostly about connecting people to each other and to huge amounts of information and other resources. We’ll call this the “Internet of People”. What we’re about to see is the Internet becoming a way to link “Things” to each other, to humans and to huge amounts of information. This is called the “Internet of Things”, or IoT. We’re also beginning to see the “Internet of People” working together with the “Internet of Things”; some are calling this the “Internet of Everything”. Imagine a world where everything (or almost everything) is interconnected and able to share and receive information. We aren’t there yet, but it would open up many completely new and different possibilities.


In earlier Episodes I discussed the concept of a “Digital Twin”. Some of the work I did in the UK in the 1980s involved very early examples of digital twins. Basically, a digital twin uses models, algorithms, data, sensors and actuators to build a realistic version in the digital world of something in the physical world. This is much more difficult than it sounds, but having a really good digital twin makes it possible to run multiple scenarios, do optimisation and run simulations. If we had a completely accurate digital twin of you, the reader, we could test new drugs on it rather than putting the real you at risk.
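To make the idea concrete, here is a deliberately tiny sketch in Python of what a digital twin involves: a model that is kept in sync with “sensor” readings from a physical object (here an invented water tank, with made-up numbers), and that lets us run what-if scenarios on the model instead of on the real thing.

```python
# A toy "digital twin" sketch (illustrative only; the tank, its numbers
# and all names are invented for this example). The twin mirrors a
# physical water tank and lets us simulate scenarios safely.

class TankTwin:
    def __init__(self, capacity_litres, level_litres=0.0):
        self.capacity = capacity_litres
        self.level = level_litres

    def sync(self, sensor_level):
        # In a real system this reading would come from a level sensor
        # on the physical tank, keeping the twin up to date.
        self.level = sensor_level

    def step(self, inflow, outflow, dt=1.0):
        # Advance the model one time step (a simple mass balance),
        # clamped to the tank's physical limits.
        self.level = max(0.0, min(self.capacity,
                                  self.level + (inflow - outflow) * dt))
        return self.level

    def simulate(self, inflow, outflow, steps):
        # Run a what-if scenario on a *copy*, so the twin itself stays
        # in sync with the physical tank.
        scenario = TankTwin(self.capacity, self.level)
        for _ in range(steps):
            scenario.step(inflow, outflow)
        return scenario.level

twin = TankTwin(capacity_litres=1000)
twin.sync(sensor_level=400)        # latest reading from the "real" tank
predicted = twin.simulate(inflow=20, outflow=5, steps=10)
print(predicted)                   # predicted level after the scenario
```

A real digital twin would of course have far richer physics, live sensor feeds and actuators, but the pattern is the same: keep the model synchronised, then experiment on the model.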


The 3rd design principle is called “hierarchical control”. It’s one of the few ways we can deal with complexity. Consider, for example, a self-driving car. It’s not built with a single computer managing everything that’s going on. It has sub-systems, and sub-sub-systems, that deal autonomously with low-level decisions. It might have a collision detection system whose job is to stop it colliding with anything else. It might also have a traffic light detection system whose only job is to look out for traffic lights and respond to their signals. There would then be higher-level systems that make strategic and difficult decisions. If, for example, the collision system says “slam on the brakes”, but the rear-view system says that there’s another car close behind you, what will the car do? A higher-level system will have to make the difficult decision.
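The braking dilemma above can be sketched in a few lines of Python. This is purely illustrative: the subsystems, thresholds and rules are invented for the example and bear no relation to any real vehicle, but they show the shape of hierarchical control, where low-level subsystems each propose an action and a higher-level supervisor arbitrates between them.

```python
# A minimal hierarchical-control sketch (illustrative only; all rules
# and thresholds here are invented, not taken from any real vehicle).

def collision_system(distance_ahead_m):
    # Low-level rule: brake hard if something is too close in front.
    return "brake_hard" if distance_ahead_m < 5 else "continue"

def rear_view_system(distance_behind_m):
    # Low-level rule: report whether a vehicle is close behind.
    return "car_close_behind" if distance_behind_m < 3 else "clear"

def supervisor(front_action, rear_status):
    # Higher-level decision: braking hard with a car close behind
    # risks a rear-end collision, so brake moderately instead.
    if front_action == "brake_hard" and rear_status == "car_close_behind":
        return "brake_moderately"
    return front_action

decision = supervisor(collision_system(distance_ahead_m=4),
                      rear_view_system(distance_behind_m=2))
print(decision)  # prints "brake_moderately"
```

The point is the structure: neither low-level subsystem knows about the other’s concerns; only the supervisor sees both and resolves the conflict.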


Robotics is one of the most controversial and problematic design principles in Industrie 4.0. It deals with ways in which machines do jobs that humans once did. Some of these robots are physical electro-mechanical devices – the kinds of things we’ve seen for many years both in science fiction and in reality. In the 1980s I worked at the GEC-Marconi Research Centre in England with industrial assembly robots that could do very impressive things. A self-driving car is an example of a “transportation robot”. We also have completely digital forms of robots, such as chatbots and robotic process automation (RPA).

I heard at a conference last year that it’s now possible to go online to a virtual employment agency and hire an RPA-enabled system to do a range of office tasks, such as processing insurance claims. And it’s here that the controversy kicks in. Do we really WANT to replace humans with robots? It becomes a social and ethical question, not simply a technical one.


The Industrie 4.0 design principles are important because they give us a far better way to define what is fundamentally different about the current state of technological innovation. We aren’t drawing arbitrary lines on history’s timeline. We also aren’t simply listing new technologies. Some of the key technologies behind these design principles are decades old. I worked on some of them in the 1980s!

A key point about the design principles I’ve listed is that it’s not about any of the principles taken individually. It’s about all four working together – the whole package!


Rather than using the poorly-defined term “4IR”, I prefer to describe the huge changes we are seeing (or about to be seeing) as a form of “digital transformation”.

The term “digital transformation” implies changes to organisations and society driven by digital innovation. The concept of digital transformation is, in fact, as old as the computer itself. Digital technology evolves on the back of constant innovation. It always has. The mainframe gave way to minicomputers, then PCs, smartphones and the cloud. Digital technology changed as the Internet, AI and IoT were invented and then gained prominence.

I like to think of two streams running side-by-side when I think of digital transformation. One is “evolutionary digital transformation”, the other is “revolutionary digital transformation”. Evolutionary transformation has always been the “way things are” in the digital economy. Every few years new technologies have required a change in how organisations and individuals engage with the world. All of us who work in “digital” have become used to this constant evolutionary change. Early systems that were written in COBOL to run on a mainframe have gone through numerous evolutionary changes to allow them to now work together with smartphones and chatbots. Our skills have also needed to evolve. I’ll comment further on skills later.

But there is a second stream that also drives digital transformation: revolutionary digital transformation. This stream covers the really big, radically different things that change the world. In engineering we call these “step changes”.

Let me describe an example. Ster Kinekor runs a large chain of movie houses throughout South Africa. In recent years something revolutionary has happened in how each cinema runs. Previously, films were reels of celluloid. A cinema needed a huge and complex projector – and a qualified projectionist – to screen a film. A fleet of vans drove around the country moving reels of film between cinemas. Each cinema had people selling tickets, selling snacks, tearing tickets at the entrance and showing customers to their seats. Over recent years this has all changed. Films are now stored digitally on a server located either at the cinema or somewhere in the cloud. Projectors are digital and can be controlled remotely from a central control room. Tickets and snacks are sold online. Ster Kinekor is now a very different, digitally transformed, business. The technologies used at today’s Ster Kinekor bring together all of the design principles discussed earlier. It is a good example of REVOLUTIONARY digital transformation.

This is an example of what I see as the so-called 4th Industrial Revolution. There are still very few such examples, and it is still far short of the actual potential implied by the Industrie 4.0 design principles.

The key point I am making is that “Revolutionary Digital Transformation”, also known as 4IR, has not yet happened. What is happening is a combination of evolutionary digital transformation and the beginnings of revolutionary change. It’s really important, however, for each society – because each of our realities is very different from those in other countries – to understand what is actually happening. It’s also important that all of us – not just the geeks – have a voice in making decisions about what kind of technology will feature in our future.


There is also a great deal of hype around skills in the context of “4IR”. The impression is being created that everyone will need to be re-trained, and that everyone will need to be an expert in robotics and AI. There are also fears that “old” IT skills are no longer relevant. While there is some truth in some of this, I think it’s something of an over-reaction.

If we go back to what I said earlier about the dual stream of evolutionary and revolutionary digital transformation, the “old” is not going away. We will still need large numbers of IT professionals who can code in languages such as Java, C++ and C#. We still need COBOL and Fortran programmers, although these are very old skills. We also need project managers, testers, user interface designers and database experts. Some of these skilled people will have to add to their skill sets. We are also going to need many people with new skills: people who can work with data (data scientists), AI and machine learning, and people who can deal with IoT, blockchains and quantum computing. These are in addition to the older legacy skills.

When considering “skills of the future” it is important that everyone – our whole population – should be “digitally literate”. School children don’t all need to become coders or robotics engineers, but they do need to understand what coding and robotics are about.

I personally think that the best way to prepare the population for digital transformation in the future is to ensure that everyone is able to deal with rapid digital change. To do this the most important skills people should have are: good foundational skills; the ability to learn new things; the ability to be a good problem solver; good communication skills – particularly with respect to modern communication tools; and, some ability to act in an entrepreneurial way.
