May 27, 2022

Twins are a shared theme in the literature and mythology of cultures worldwide, used to explore early concepts like duality, polarity, and unity. Equally prominent, however, were explorations of loss, fratricide, and self-realization through remorse. Indeed, for every Castor and Pollux, there is a Cain and Abel, or a Romulus and Remus. Twins in myth evoke an impressionistic reaction to the triumphs and tragedies they represent. The efforts of the current decade may tell us which of the two will ultimately characterize the concept of digital twins and their implementation.

Since Michael Grieves introduced the concept in 2002 as part of his work on product lifecycle management at the University of Michigan (the term itself is often credited to NASA’s John Vickers), “digital twin” has become an ambiguous label for the future of simulation and modeling applications. While Grieves’ earliest intention was to improve product life-cycles, the idea of high-fidelity, virtual representations of physical objects seemed like a certain future for computational modeling, given technological capabilities and their increasing role in product design and iteration processes.

What was once Grieves’ insight into the future of technological applications has become a catch-all for any number of virtual models of physical entities, along with the flows of data between them that keep the two in parity. The phrase’s ambiguity stems from its widespread use across industries and from the evolving methodologies for reaching the virtual “mirrored,” or “twinned,” ideal.

As with any other technology, simulations and computational models have limitations that tend to be overshadowed by their perceived benefits and desired insights. Moving from the abstract to the concrete, requirements and standards for what constitutes a digital twin have yet to appear. What’s more, the concept of a digital twin is arguably not new at all, but simply an aggregation of techniques and research already in existence.


SPECULUM SPECULORUM

An issue with the popularity of terms like “digital twin” is that they risk becoming misnomers due to a lack of common development methodology, much like the internet of things (IoT) platforms they rely on, many of which require no internet connection at all. Digital twins face difficulty not only in procuring enough sensor data to mirror physical entities, but also in procuring and applying the correct data to become accurate representations. For example, a digital twin of a car’s braking system could predict when maintenance will be needed by feeding sensor data into a predictive model of brake-pad wear. Yet even such a specific system depends on numerous external factors, like environment, temperature, and lubrication, as well as on an IoT platform of sensors that communicate with and collect data from connected assets, or parts. The absence of any one of these parameters can produce incomplete or erroneous data, leading to faults in the virtual entity. Identifying missing parameters and diagnosing inconsistencies between the physical and virtual entities can make digital twins prohibitive in terms of both cost and labor.
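To make the brake example concrete, below is a minimal sketch of the kind of predictive logic such a twin might run. The sensor history, linear wear model, and service threshold are all hypothetical; a production twin would also fold in the external factors noted above.

```python
# Minimal sketch of predictive maintenance for brake-pad wear.
# Sensor values, the linear wear model, and thresholds are illustrative
# assumptions, not a reference implementation.

def estimate_wear_rate(samples):
    """Estimate pad wear per km from (odometer_km, pad_thickness_mm) samples."""
    (km0, t0), (km1, t1) = samples[0], samples[-1]
    return (t0 - t1) / (km1 - km0)  # mm lost per km driven

def km_until_service(thickness_mm, wear_rate_mm_per_km,
                     min_safe_thickness_mm=3.0):
    remaining_mm = thickness_mm - min_safe_thickness_mm
    return max(remaining_mm, 0.0) / wear_rate_mm_per_km

# Hypothetical telemetry pulled from the twin's IoT platform:
history = [(10_000, 11.2), (20_000, 10.1), (30_000, 9.0)]
rate = estimate_wear_rate(history)
print(f"Service due in ~{km_until_service(9.0, rate):,.0f} km")
```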

The figure below shows hypothetical examples of digital twin implementations for an atomic layer deposition reactor, a complex machine used to deposit thin films onto materials.


At their core, digital twins are real-time, virtual representations of physical entities enabled by sensors and data. A twin can take on a specific role depending on the type of problem it solves or the advantages it offers. Adopting the model introduced by Oracle, there are three primary implementations for twins:


Virtual Twins

A virtual representation of a physical entity or asset. A virtual twin holds data, best described as parameters, provided to it from its physical counterpart, and requires the ability to establish a connection through which it can retrieve information from the physical environment. The type and number of parameters sent across this connection, as well as their accuracy, are the primary attributes in grading and defining the “fidelity” of the virtual entity.
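As a rough illustration, a virtual twin can be reduced to a parameter store plus a sync step over the connection described above. Everything here (the asset ID, parameter names, and readings) is an illustrative assumption, not any particular platform’s API.

```python
# Minimal sketch of a virtual twin mirroring parameters from a physical asset.
from dataclasses import dataclass, field

@dataclass
class VirtualTwin:
    asset_id: str
    parameters: dict = field(default_factory=dict)

    def sync(self, readings: dict) -> None:
        """Update the twin from one batch of readings off the connection."""
        self.parameters.update(readings)

twin = VirtualTwin("reactor-01")  # hypothetical asset
twin.sync({"chamber_temp_C": 215.4, "pressure_Pa": 130.0, "valve_open": True})
print(twin.parameters)
```

Under this view, fidelity is graded by how many parameters the twin carries and how accurately and promptly they track the physical asset.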


Predictive Twins

As the name suggests, this implementation focuses on creating predictive models: it is not a static representation of a physical entity, but one built from data gathered across historic states. These twins serve to detect problems that could occur in a future state, either proactively protecting against them or giving designers the opportunity to diagnose and prevent them. Predictive twins can be much simpler than other implementations, since they can focus on specific parameters, such as machine data, rather than constantly receiving information from sensors and recreating a full virtual environment.
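A minimal sketch of the idea follows, assuming a simple linear trend fitted to hypothetical historic vibration states. Real predictive twins would use far richer models, but the shape is the same: learn from past states, extrapolate, and flag the fault early.

```python
# Minimal sketch of a predictive twin: fit a trend to historic machine
# states and extrapolate to flag a future fault. The data and the
# vibration threshold are illustrative assumptions.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for a 1-D trend."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

hours     = [100, 200, 300, 400, 500]   # run time at each historic state
vibration = [1.1, 1.3, 1.4, 1.7, 1.9]   # mm/s RMS recorded at each state

slope, intercept = fit_line(hours, vibration)
FAULT_LIMIT = 2.8  # hypothetical vibration level that precedes failure
print(f"Projected fault near {(FAULT_LIMIT - intercept) / slope:.0f} run hours")
```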


Twin Projections

This implementation is also used to create predictive models, but relies heavily on IoT data exchange between individually addressable devices over a common network rather than on direct sensing of physical environments. Applications and software built on these IoT platforms generally have access to aggregate data, which is used to predict machine states and alleviate workflow issues.
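A minimal sketch of a twin projection, assuming readings have already been exchanged by individually addressable devices over a common network; the device IDs and metrics are hypothetical.

```python
# Minimal sketch of a twin projection: aggregate per-device IoT readings
# into a fleet-level state used to predict and alleviate workflow issues.
from statistics import mean

fleet = {  # readings keyed by device address (all values hypothetical)
    "press-01": {"utilization": 0.92, "queue_depth": 7},
    "press-02": {"utilization": 0.41, "queue_depth": 0},
    "press-03": {"utilization": 0.88, "queue_depth": 5},
}

avg_util = mean(d["utilization"] for d in fleet.values())
bottlenecks = [dev for dev, d in fleet.items() if d["queue_depth"] > 3]
print(f"Fleet utilization: {avg_util:.0%}; bottlenecks: {bottlenecks}")
```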

Each implementation faces a number of issues. Maintaining connectivity to sensors for data transfer from physical entities, handling the volume of network traffic between devices, and identifying key parameters are make-or-break concerns in implementing successful twins. Methods of collecting data also remain ununified, which further exacerbates the situation; most vehicles for standardization lie in the sharing of models and information.

Relying on such collaborations raises the issue of data ownership, one already marred by moral and legal controversy. Nonetheless, the promise of improvements in behavior, conformity, design, manufacturability, and structure has already attracted major attention from researchers.


BEAUTY IN COMPLEXITY

Given the broad applications and ambitious technology behind the concept, it is interesting to consider what cannot be digitally twinned, especially since a digital twin of Earth is already in production. The answer ultimately depends on a digital twin’s use-case, and on the degree to which it can achieve and produce the desired results.

Using this as a criterion doesn’t narrow the already broad definition of what constitutes a digital twin; one could argue that established technologies like Google Maps and Microsoft Flight Simulator are digital twins. While this may detract from the term’s novelty, “digital twin” also carries an undertone of possibility through connectivity. Excitement surrounding digital twins is heavily tied to the anticipation of a new level of interconnectedness between devices, one that enables automation and machine learning. This is seen as a new phase for technology, even a new, fourth industrial revolution, commonly referred to as Industry 4.0.

Still, the complexity of digital twins creates a high barrier to production and implementation for many prospective innovators. A common misconception is that producing a digital twin simply requires hiring data scientists and providing them an analytics platform. Domain expertise and product lifecycle management tend to be overlooked as a result.

The configuration of assets on a product also impacts design and is subject to changes in scale and capability. Divergence from the original, pilot assets can create a cascading effect of incorrect or outdated information between iterations or generations of a product. Asset changes are not always anticipated, certain assets outlast others, and replacing an asset after failure can mean drastic changes in design. For products that go through several generations or remain on the market for decades, synchronizing the digital twin is the only solution, and it may need to happen as often as changes are made to the product itself.
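One lightweight way to manage that synchronization is to version the twin’s configuration whenever an asset changes, so downstream consumers can detect stale data from an earlier generation. The field names and revisions below are illustrative only.

```python
# Minimal sketch of keeping a twin synchronized with asset changes across
# product generations by bumping a revision counter on every change.

def apply_change(config: dict, change: dict) -> dict:
    """Apply an engineering change and bump the twin's revision."""
    return {**config, **change, "revision": config["revision"] + 1}

twin_config = {"revision": 3, "brake_pad": "P-100", "sensor_fw": "1.4"}
# A pad replaced mid-life with a different part diverges from the pilot design:
twin_config = apply_change(twin_config, {"brake_pad": "P-200"})
print(twin_config)  # {'revision': 4, 'brake_pad': 'P-200', 'sensor_fw': '1.4'}
```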

It can be challenging to coordinate with manufacturing processes and across iterations or versions as a product makes its way to the consumer. One of the primary use-cases for digital twins in manufacturing is shop-floor optimization. A similar focus on improving operations appears in supply-chain use-cases seeking to optimize warehouse design. Generally, study and expertise surrounding these kinds of improvements and optimizations fall under maintenance, repair, and operations (MRO).


SIMULATION-BASED DIGITAL TWINS

Computational simulations are a core capability facilitating the development of digital twins. By combining high-fidelity simulations with fully coupled multiphysics solvers, companies can create models of assets and tune them using their own data. Simulation insights create robust iteration phases that can cut process and testing costs, ultimately leading to shorter cycle times and better management of product life cycles. Regardless of the size of a company or the scale of its products, simulations can connect the earliest designs made by research and development teams to the final iterations made by manufacturing teams by providing clear, relevant physical and chemical insights.
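As a toy illustration of tuning a model with one’s own data, the sketch below calibrates a single parameter of a stand-in “solver” against measurements via least squares. The model form and the data are invented for illustration; a real multiphysics solver exposes far more parameters, but the calibration loop looks much the same.

```python
# Minimal sketch of calibrating a simulation parameter against measured data.

def simulate(k, x):
    """Stand-in for a physics solver: predicted output for parameter k."""
    return k * x ** 2

measured = [(1.0, 2.1), (2.0, 8.3), (3.0, 17.8)]  # (input, observed output)

# Closed-form least-squares estimate of k for the model y = k * x^2:
k_fit = sum(y * x ** 2 for x, y in measured) / sum(x ** 4 for x, _ in measured)

residuals = [y - simulate(k_fit, x) for x, y in measured]
print(f"k = {k_fit:.3f}, residuals = {[round(r, 2) for r in residuals]}")
```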


Given the increasing market focus on visual and virtual utility, impressive graphics can be misleading when it comes to digital twins. Ultimately, an industrial simulation that does not incorporate high-fidelity physics is essentially digital art. Within technical domains, the centermost aspect of a digital twin should be the fidelity with which it can predict not only steady-state processes, but also the edge cases where the physics becomes challenging.

Of all the engineering design problems with applications for digital twins, those experienced within the semiconductor industry are perhaps the most complex. In this industry’s “race to the bottom,” providing high-fidelity models requires the capability to determine the effects of disruptors like chemical impurities, which can threaten the functionality of critical components like wafers, at levels of one part per trillion (one nanogram per kilogram). Processes like atomic layer deposition are also extremely sensitive to the local species concentrations and pressure profiles in the vicinity of the wafer being produced. While these are examples of restrictions imposed by the difficulty of working at the atomic scale, insight into the design and manufacturing of semiconductors represents one of the most rigorous testing grounds for digital twins.
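For a sense of scale, a one-part-per-trillion tolerance works out as follows (the batch mass here is chosen arbitrarily):

```python
# Quick arithmetic check: 1 ppt by mass = 1e-12 kg/kg = 1 ng per kg.
batch_kg = 0.125                 # hypothetical batch of process chemistry
impurity_kg = batch_kg * 1e-12   # one part per trillion by mass
print(f"{impurity_kg * 1e12:.3f} ng of impurity")  # 0.125 ng
```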


Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Rasheed, Adil, Omer San, and Trond Kvamsdal. “Digital twin: Values, challenges and enablers from a modeling perspective.” IEEE Access 8 (2020): 21980-22012.


Rajesh, P. K., et al. “Digital twin of an automotive brake pad for predictive maintenance.” Procedia Computer Science 165 (2019): 18-24.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.