Industry 4.0 and Digital Twins: Key lessons from NASA
Ben Hicks, Professor of Mechanical Engineering at the University of Bristol, looks at the origins of the digital twin and explores its true purpose in digital transformation.
At the time of writing this article, the concept of the Digital Twin sits at the very peak of inflated expectations in Gartner’s 2018 Hype Cycle for Emerging Technologies. Given this, it would appear at first glance that the Digital Twin, like so many concepts before it, will plummet into the trough of disillusionment within the next year or two. But is this really the case, given such strong and unwavering interest from industry, where senior leadership teams espouse the game-changing potential of Industry 4.0 and its imminent impact on productivity and all things measurable?
Well, it is hard to tell, and of course predictions, whether from economists or philosophers, rarely come true exactly. So in this article we will refrain from prediction and focus instead on what the Digital Twin actually is, its current applications and its future potential. These, we believe, demonstrate that the Digital Twin may be here to stay, and that the trough of disillusionment may not be as deep as might be expected.
Let’s start with the definition of a Digital Twin. I have attended many fora, both industry and academic, in which differing understandings and definitions of the Digital Twin are presented: some parties state that they already have one, others that they need one. Such widely varying understanding is arguably one of the main contributors to unmet expectations and disillusionment, and in the long term it can hinder the advancement of the Digital Twin paradigm and, ultimately, the benefits that can be realised.
So, where did the concept come from? Like so many concepts in engineering (cf. Systems Engineering and Condition Monitoring), the phrase was first coined by John Vickers of NASA in 2002 and later used by Michael Grieves in 2003. While Grieves’ thinking was grounded in the field of product lifecycle management, NASA’s interest in the Digital Twin was motivated by its requirement to operate, maintain and repair physical systems that are in space. By way of example, we all recall Apollo 13, and the fact that NASA retained a mirrored system on Earth that allowed engineers and astronauts to determine how to rescue the mission. It was the concept of the mirrored system, and a desire to reduce cost and resources, that motivated NASA to develop digital twins for its space assets.
Perhaps lost in many of today’s digital twin espousals is the fact that one of the most significant challenges for NASA was not the development of the geometric or multi-physics model, but identifying, capturing and updating the digital twin with data describing the condition of the physical asset. It is this continuous or periodic ‘twinning’ of the digital to the physical, in order to mirror condition, that separates virtual prototyping and model-based engineering from a digital twin.
To some this might seem rather pedantic, but I can assure readers that it is far from it. The key point is that, in general, the purpose of the digital twin is different from that of a virtual prototype. In most cases the virtual prototype is used to guide a design and development process, with the ultimate aim of creating a sufficiently detailed definition of the product or system that it can be produced. At this stage, physical products operating in physical environments are rarely available (prototypes are constructed mainly to support development and/or to verify and validate computer models), and hence the data and understanding that can be generated about in-service behaviour is comparatively limited. In contrast, when a product or system is produced and in use, such data can be collected, and the aim of any modelling is, in general, to help maintain condition and/or to adapt or modify the product or system for changing conditions or environments. Now, if time and resources were unlimited it might be possible to consider all scenarios and perspectives at the time of design, but this is not the reality we live in. Consequently, the motivations behind, and understanding generated by, virtual prototypes and models at the design stage versus the in-service stage are likely to be different.
By way of example, consider a simple conveyor. For the purpose of designing the conveyor, relatively straightforward simulation of the loads might be undertaken for structural design and to inform the specification of standard transmission elements, such as a motor, gearbox and bearings. In contrast, to create a digital twin, the motor current and the temperature may be recorded and used to inform a physics model capable of predicting remaining useful life, as well as identifying abuse loads that exceed the design envelope. In this example, the models that underpin the digital twin are distinct from those used in its design and development. It is this difference in purpose and modelling constructs that distinguishes a digital twin from a virtual prototype.
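To make the conveyor example concrete, the sketch below shows one way such a model might look: a toy remaining-useful-life estimator driven by motor current and temperature, with a check for loads outside the design envelope. All names, thresholds and constants here are illustrative assumptions for this article, not a real conveyor model.

```python
# Illustrative sketch only: a toy remaining-useful-life (RUL) estimator for a
# conveyor drive, based on recorded motor current and temperature.
# Every constant and threshold below is a hypothetical assumption.

DESIGN_CURRENT_A = 12.0     # assumed design-envelope current limit (amps)
BASE_LIFE_HOURS = 40_000.0  # assumed nominal drive life at reference conditions
REF_TEMP_C = 40.0           # assumed reference operating temperature

def wear_increment(current_a: float, temp_c: float, dt_hours: float) -> float:
    """Fraction of nominal life consumed over dt_hours.

    Simple rule of thumb: wear scales with the square of load (current as a
    proxy) and doubles for every 10 C above the reference temperature.
    """
    load_factor = (current_a / DESIGN_CURRENT_A) ** 2
    temp_factor = 2.0 ** ((temp_c - REF_TEMP_C) / 10.0)
    return load_factor * temp_factor * dt_hours / BASE_LIFE_HOURS

def assess(readings: list, dt_hours: float = 1.0):
    """Return (remaining_life_hours, abuse_events) for a series of
    (current_a, temp_c) readings sampled every dt_hours."""
    consumed = 0.0
    abuse_events = 0
    for current_a, temp_c in readings:
        if current_a > DESIGN_CURRENT_A:  # load exceeds the design envelope
            abuse_events += 1
        consumed += wear_increment(current_a, temp_c, dt_hours)
    remaining_fraction = max(0.0, 1.0 - consumed)
    return remaining_fraction * BASE_LIFE_HOURS, abuse_events
```

Fed with hourly readings, `assess` returns an estimated remaining life and a count of out-of-envelope events; a real implementation would of course use a validated physics model rather than this rule of thumb.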
If this is the case, and it is all about the modelling, why is so much attention being given to Digital Twins now, when multi-physics simulation is relatively mature? Well, let’s revisit NASA’s early work. As alluded to in the previous paragraph, the challenge for NASA was establishing what data to collect, how to capture it, how to store it and, importantly, how to use it. In the early 21st century this task was costly even for NASA. While sensing and data exchange in extreme environments is still not cheap, the cost and ease of data acquisition has improved by an order of magnitude in the last decade. For the case of the conveyor, hardware to sense and transmit data on electrical current and temperature can be acquired for less than £30. It is this low-cost data acquisition and the ease of connectivity of devices (the Internet of Things) that has made the paradigm of the Digital Twin affordable for all, and helped propel it up the hype cycle to its peak.
In summary, in the early 21st century the Digital Twin would have been viewed by many as expensive, ‘nice to have’ or only of real value in extreme environments. Today, with the incredible advances in computing power, multi-physics simulation and low-cost sensing and data storage, the concept and philosophy of the Digital Twin can be accessed by any organisation. However, although the technology is now relatively affordable, embarking on a Digital Twin project should be treated like any other major organisational change project. This is because the true opportunity of the Digital Twin lies not in doing what organisations are already doing, but in looking to do things differently. To demonstrate this, the previously mentioned conveyor manufacturer is no longer just selling conveyors but is providing power-by-the-hour and availability contracts, thanks to the ability to monitor usage and automatically determine health and the onset of maintenance issues.
Based on a thorough review of the literature, and the aforementioned confusion amongst many industrialists and academics around the definition of a Digital Twin, a synthesised definition is given below.
A Digital Twin is an appropriately synchronised body of useful information (structure, function, and behaviour) of a physical entity in virtual space, with flows of information that enable convergence between the physical and virtual states.
The Digital Twin can exist at any stage of the life-cycle and aims to leverage aspects of the virtual environment (high-fidelity, multi-physics, external data sources, etc.), computational techniques (virtual testing, optimisation, prediction, etc.), and aspects of the physical environment (historical performance, customer feedback, cost, etc.) to improve elements of the product (performance, function, behaviour, manufacturability, etc.) over the life-cycle.
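The synchronisation at the heart of this definition can be sketched in a few lines of code: a virtual object whose state converges on the physical asset through an inbound flow of measurements, and which feeds advice back the other way. The class, field names and threshold below are hypothetical, chosen purely to illustrate the two-way information flow.

```python
# Minimal sketch of the 'twinning' idea in the definition above: a virtual
# state periodically synchronised with measurements from the physical asset.
# All names and the temperature threshold are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ConveyorTwin:
    """Virtual mirror of a physical conveyor's observed condition."""
    motor_current_a: float = 0.0
    temperature_c: float = 0.0
    history: list = field(default_factory=list)

    def synchronise(self, measurement: dict) -> None:
        """Physical-to-virtual flow: update the mirrored state from a
        measurement taken on the real asset."""
        self.motor_current_a = measurement["current_a"]
        self.temperature_c = measurement["temp_c"]
        self.history.append(measurement)

    def recommend(self) -> str:
        """Virtual-to-physical flow: advice derived from the mirrored
        state, fed back to operate or maintain the asset."""
        if self.temperature_c > 80.0:  # hypothetical inspection threshold
            return "inspect: overtemperature"
        return "ok"
```

Without the `synchronise` step this is just a model; it is the recurring flow of condition data, and the decisions flowing back, that make it a twin in the sense defined above.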
Professor Ben Hicks