Mirroring the World with Digital Twins


Twins in literature and mythology are a theme shared across cultures, used to explore concepts like duality, polarity, and unity. Yet alongside these themes sit darker explorations of loss, fratricide, and self-realization through remorse. Indeed, for every Castor and Pollux, there is a Cain and Abel, or a Romulus and Remus. Twins in myth evoke both the triumph and the tragedy they represent. The efforts of the current decade may tell us which of the two will ultimately characterize digital twins and their implementation.

Since Michael Grieves introduced the concept in the early 2000s in the context of product lifecycle management, "digital twin" has become an ambiguous label for the future of simulation and modeling applications. While Grieves' original aim was improving product life cycles, the idea of high-fidelity virtual representations of physical objects seemed like a certain future for computational modeling, given technological capabilities and their increasing role in product design and iteration.

What was once Grieves' insight into the future of technological applications has become a catch-all for any number of virtual models of physical entities, as well as the flows of data between them that maintain parity. The phrase's ambiguity stems from its widespread usage across industries and from the evolving methodologies used to reach the virtual "mirrored," or "twinned," ideal.

As with any other technology, simulations and computational models have limitations that tend to be overshadowed by their perceived benefits and desired insights. Beyond the abstract, requirements and standards for what constitutes a digital twin have yet to materialize. What's more, the concept of a digital twin is arguably not new at all, but simply an aggregation of techniques and research already in existence.


SPECULUM SPECULORUM

An issue with the popularity of terms like "digital twin" is that they risk becoming misnomers in the absence of a common development methodology, much like the internet of things (IoT) platforms they rely on, which may require no internet connection at all. Digital twins face difficulties not only in procuring enough sensor data to mirror physical entities, but also in procuring and applying the correct data to become accurate representations. For example, a digital twin of a car's braking system could predict when maintenance will be needed by feeding sensor data into a predictive model of brake-pad wear. However, even this specific system depends on numerous external factors like environment, temperature, and lubrication, as well as on an IoT platform of sensors that communicate and collect data from connected assets, or parts. The absence of any one of these parameters can produce incomplete or erroneous data that leads to faults in the virtual entity. Identifying missing parameters and diagnosing inconsistencies between physical and virtual entities can make digital twins prohibitive in terms of both cost and labor.

The figure below shows hypothetical examples of digital twin implementations for an atomic layer deposition reactor, a complex machine used to deposit thin films onto materials.

 

At its core, a digital twin is a real-time, virtual representation of a physical entity, enabled by sensors and data. Twins can take on specific roles depending on the type of problem they solve or the advantages they offer. Adopting the model introduced by Oracle, there are three primary implementations for twins:

 

Virtual Twins

A virtual representation of a physical entity or asset. A virtual twin holds data provided from its physical counterpart, best described as parameters, and requires a connection through which it can retrieve information from the physical environment. The type and number of parameters sent across this connection, as well as their accuracy, are the primary attributes for grading the "fidelity" of the virtual entity.
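To make the idea concrete, here is a minimal sketch of how a virtual twin's parameter set and its connection to the physical side might be represented. The class and field names are hypothetical, not drawn from any particular platform:

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class VirtualTwin:
    """Toy virtual twin: a named asset plus the parameters mirrored
    from its physical counterpart (all names are illustrative)."""
    asset_id: str
    parameters: Dict[str, float] = field(default_factory=dict)

    def sync(self, readings: Dict[str, float]) -> None:
        # In practice these readings would arrive over a connection
        # to sensors or an IoT gateway on the physical asset.
        self.parameters.update(readings)

    def fidelity(self, expected: Set[str]) -> float:
        # Crude fidelity grade: the fraction of expected parameters
        # actually mirrored. Grading accuracy would need ground truth.
        if not expected:
            return 0.0
        return len(expected & set(self.parameters)) / len(expected)

twin = VirtualTwin("reactor-01")
twin.sync({"pressure_mtorr": 5.0, "gas_temp_k": 320.0})
print(twin.fidelity({"pressure_mtorr", "gas_temp_k", "rf_power_w"}))  # 2 of 3
```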

 

Predictive Twins

As the name suggests, this implementation focuses on creating predictive models: not a static representation of a physical entity, but one built from data gathered from historic states. These twins serve to detect problems that could occur at a future state, either proactively protecting against them or giving designers the opportunity to diagnose and prevent them. Predictive twins can be much simpler than other implementations, focusing on specific parameters like machine data rather than constantly receiving sensor information to recreate a full virtual environment.
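Returning to the braking-system example above, a predictive twin can be as simple as a trend fitted over historic states. The sketch below, with made-up numbers, estimates when a brake pad reaches its service limit:

```python
import numpy as np

# Historic states of a (hypothetical) brake-pad twin: odometer readings
# in km and measured pad thickness in mm. Values are illustrative.
odometer_km = np.array([0, 5_000, 10_000, 15_000, 20_000])
pad_mm = np.array([12.0, 11.1, 10.3, 9.4, 8.6])

# Fit a linear wear model: thickness = a * distance + b.
a, b = np.polyfit(odometer_km, pad_mm, deg=1)

# Predict the odometer reading at which the pad reaches a 3 mm
# service limit, so maintenance can be flagged ahead of time.
service_limit_mm = 3.0
km_at_limit = (service_limit_mm - b) / a
print(f"Estimated maintenance due near {km_at_limit:,.0f} km")
```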

 

Twin Projections

This implementation is also used to create predictive models, but relies heavily on IoT data exchange between individually addressable devices over a common network, rather than on sensors within physical environments. Applications or software that generate insights from IoT platforms generally have access to aggregate data that is used to predict machine states and alleviate workflow issues.
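A minimal sketch of that aggregate view might look like the following, where a stream of readings from individually addressable devices is rolled up per machine; the device names, metric, and alarm threshold are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical aggregate telemetry from addressable devices on a
# shared network, as (device_id, metric, value) tuples.
readings = [
    ("press-01", "vibration_mm_s", 2.1),
    ("press-01", "vibration_mm_s", 2.4),
    ("press-02", "vibration_mm_s", 5.9),
    ("press-02", "vibration_mm_s", 6.3),
]

# Roll the stream up per device and flag machines whose average
# drifts past a fleet-level alarm threshold.
per_device = defaultdict(list)
for device, _metric, value in readings:
    per_device[device].append(value)

ALARM_MM_S = 4.5
for device, values in per_device.items():
    avg = mean(values)
    status = "ALERT" if avg > ALARM_MM_S else "ok"
    print(f"{device}: mean vibration {avg:.1f} mm/s [{status}]")
```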

Each implementation faces its own hurdles. Maintaining connectivity to sensors for data transfer from physical entities, handling the volume of network traffic between devices, and identifying key parameters are make-or-break factors in implementing successful twins. Data collection methods remain un-unified, further exacerbating the situation, with most vehicles for standardization lying in the sharing of models and information.

The issue that results from relying on such collaborations is data ownership, an issue already marred by moral and legal controversies. Nonetheless, the promised improvements in behavior, conformity, design, manufacturability, and structure have already attracted major attention from researchers.


BEAUTY IN COMPLEXITY

Given the broad applications and ambitious tech behind the concept, it is interesting to consider what cannot be digitally twinned, especially given that a digital twin of Earth is already in production. The answer ultimately depends on a digital twin's use-case and the degree to which it can achieve and produce desired results.

Using this as a criterion doesn't narrow the already broad definition of what constitutes a digital twin; one could argue that established technologies like Google Maps and Microsoft Flight Simulator are digital twins. While this may detract from the term's novelty, "digital twin" also carries an undertone of possibility through connectivity. Excitement surrounding digital twins is heavily tied to the anticipation of a new level of interconnectedness between devices that enables automation and machine learning. This is seen as a new phase for technology, even a new, fourth industrial revolution, commonly referred to as Industry 4.0.

Still, the complexity of digital twins creates a high barrier to production and implementation for many prospective innovators. A common misconception is that producing a digital twin simply requires hiring data scientists and providing them an analytics platform; domain expertise and product lifecycle management tend to be overlooked as a result.

The configuration of assets on a product also impacts design and is subject to changes in scale and capabilities. Divergence from the original pilot assets can create a cascading effect of incorrect or outdated information between iterations or generations of a product. Asset changes are not always anticipated, certain assets outlast others, and asset replacement after failures can mean drastic changes in design. For products that go through several generations or are sold for decades, ongoing synchronization of the digital twin is the only solution, and it may need to occur as often as changes are made to the product itself.

It can be challenging to coordinate with manufacturing processes and across iterations or versions as a product makes its way to the consumer. One of the primary use-cases for digital twins in manufacturing is shop-floor optimization; similar operational focuses appear in supply-chain use-cases seeking to optimize warehouse design. Generally, the study and expertise surrounding these kinds of improvements and optimizations falls under maintenance, repair, and operations (MRO).


SIMULATION-BASED DIGITAL TWINS

Computational simulation is a core capability facilitating the development of digital twins. By combining high-fidelity simulations and fully coupled multiphysics solvers, companies can create models for assets and tweak them using their own data. Simulation insights create robust iteration phases that can cut process and testing costs, ultimately leading to shorter cycle times and better management of product life cycles. Regardless of the size of a company or the scale of its products, simulations can connect the earliest designs made by research and development teams to final iterations made by manufacturing teams by providing clear, relevant physical and chemical insights.

Given the increasing market focus on visual and virtual utility, impressive graphics can be misleading when it comes to digital twins. Ultimately, an industrial simulation that does not incorporate high-fidelity physics is essentially digital art. Within technical domains, the centermost aspect of a digital twin should be the fidelity with which it can predict not only steady-state processes, but also edge cases where the physics is challenging.

Of all the engineering design problems with applications for digital twins, those experienced within the semiconductor industry are perhaps the most complex. In this industry's "race to the bottom," providing high-fidelity models requires the capability to determine the effects of disruptors like chemical impurities, which can threaten the functionality of critical components like wafers, at a margin of one part per trillion (or one nanogram per kilogram). Additional processes like atomic layer deposition are extremely sensitive to local species concentrations as well as pressure profiles in the vicinity of the wafer being produced. While these are examples of restrictions based on the difficulty of working at an atomic scale, the design and manufacturing process for semiconductors represents one of the most rigorous testing grounds for digital twins.
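As a quick check on that unit equivalence:

```latex
1\,\mathrm{ppt} = 10^{-12}, \qquad
\frac{1\,\mathrm{ng}}{1\,\mathrm{kg}} = \frac{10^{-9}\,\mathrm{g}}{10^{3}\,\mathrm{g}} = 10^{-12}
```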


Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Rasheed, Adil, Omer San, and Trond Kvamsdal. “Digital twin: Values, challenges and enablers from a modeling perspective.” IEEE Access 8 (2020): 21980-22012.


Rajesh, P. K., et al. “Digital twin of an automotive brake pad for predictive maintenance.” Procedia Computer Science 165 (2019): 18-24.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

Plasma Processing with Carbon and Fluorine


As the semiconductor industry continues to shrink critical feature sizes and improve device performance, etch processing faces mounting challenges as smaller features are processed on new device structures. Higher-density and higher-aspect-ratio features introduce new challenges that require additional innovation across multiple areas of wafer processing. Because of this complexity, such innovations increasingly rely on comprehensive physical, chemical, and computational models of plasma etch processes.

Plasma etching is a critical process used in semiconductor manufacturing for removing materials from unit surfaces and remains the only commercially viable technology for anisotropic removal of materials from surfaces. Although plasma was introduced into nanoelectronic fabrication processes in the mid-1980s and transistor size has since shrunk by nearly two orders of magnitude, from 1.0 μm to ∼0.01 μm today, that progress was driven mainly by trial and error. Unfortunately, detailed mechanisms for plasma etch processes are not yet well understood for a majority of process gasses, so their development, improvement, and validation remains a constant endeavor, one that would open up more opportunities for innovation in this area.

Every Last, Atomic Detail

The growing costs of etching threaten to slow the rate of improvement in density and process speed, though manufacturing expenses can be mitigated using simulation tools. Each generation of devices requires more layers, more patterning, and more patterning cycles, which continue to increase overall cost and complexity. Even if component sizes can be reduced further, manufacturers face additional costs in developing ever more precise lithography and etching machines. This highlights the balance between high-volume atomic layer processing and the need for a renewed approach to miniaturization in order to extend Moore's Law.

Plasma etching takes place as part of the process of wafer fabrication, which in turn is a main process in the manufacturing procedure for semiconductors. For a wafer to be finalized, cycles must be completed potentially hundreds of times with different chemicals. Each cycle increases the number of layers and features that the desired circuit carries. 

Wafers begin as thin, pure, non-conductive silicon discs, generally ~6 to 12 inches in diameter. These wafers are made of crystalline silicon, with extreme attention paid to chemical purity before oxidation and coating can occur. Oxidation is one of the oldest steps in semiconductor manufacturing, in use since the 1950s; silicon has a great affinity for oxygen, which is absorbed readily and transferred across the oxide. Layers of insulating and conductive materials are then coated onto the wafer before a photoresist, a mask for etching into the oxide, can be applied.

Photoresist turns into a soluble material when exposed to ultraviolet light, so that exposed areas can be dissolved using a solvent. The resulting pattern is what gives engineers control at later stages like etching and doping, when devices are formed. Integrated circuit patterns are first mapped onto a glass or quartz plate, with holes and transparencies that allow light to pass through, and with multiple plates masking each layer of the circuit. The aforementioned ultraviolet light is then applied to transfer the patterns from the photoresist coating onto the wafer, and the photoresist chemicals are removed prior to etching. It is at this point that a feed gas stream, a mixture of gasses combining a carrier (like nitrogen) with an etchant (or other reactive gas), is introduced to create the chemical reactions that remove materials from the wafer.

During the etching process, areas left unprotected by the photoresist layer are chemically removed. Etching generally refers to the removal of materials; however, it requires that photomask layers and underlying materials remain unaffected in the process. In some cases, as with anisotropic etches, materials are removed in specific directions to produce geometric features like sharp edges and flat surfaces, which can also increase etch rates and lower cycle times. Metal deposition and etching place metal links between transistors, and are among the final steps before a wafer can be completed.

Both physical and chemical attributes are present in the etching process. The active species (atoms, ions, and radicals) are generated in the electron impact dissociation reaction of feed gasses. Feed gas mixtures for plasma etching are usually complex due to the conflicting requirements on etch rate, selectivity to mask and underlayer, and anisotropy. Also, the plasma itself dissociates the feed gas into reactive species which can react with each other in the gas phase and on the surface, leading to a further cascade of species generation in the plasma.

The most common etchant atoms are fluorine (F), chlorine (Cl), bromine (Br), and oxygen (O), which are usually produced by using mixtures of chemically reactive gasses, such as CF₄, O₂, Cl₂, CCl₄, HBr, and CHCl₃. Inductively coupled as well as capacitively coupled plasma reactors (ICP and CCP, respectively) have found the most widespread use in semiconductor manufacturing. ICP sources allow the generation of relatively dense plasmas (∼10¹⁶–10¹⁷ m⁻³) at relatively low gas pressures (1–10 mTorr). With independent wafer biasing, they also allow independent control of the ion flux and ion energy at the wafer surface. This process can be engineered to be chemically selective in order to remove different materials at different rates.
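For a sense of the numbers involved, a back-of-the-envelope estimate of the ion flux reaching the wafer can be made from the Bohm criterion; the density and temperature below are illustrative mid-range ICP values, not output from any specific reactor model:

```python
import numpy as np

# Rough Bohm-flux estimate for an ICP-like plasma. All values are
# illustrative, not results from a validated simulation.
e = 1.602e-19           # elementary charge [C]
n_e = 5e16              # bulk plasma density [m^-3]
T_e = 3.0               # electron temperature [eV]
m_ion = 19 * 1.661e-27  # F+ ion mass [kg] (~19 amu)

u_bohm = np.sqrt(e * T_e / m_ion)   # Bohm velocity [m/s]
n_sheath = 0.61 * n_e               # density at sheath edge (low-pressure estimate)
flux = n_sheath * u_bohm            # ion flux at the wafer [m^-2 s^-1]

print(f"Bohm velocity ~ {u_bohm:.2e} m/s")
print(f"Ion flux      ~ {flux:.2e} m^-2 s^-1")
```

With independent wafer biasing, the ion energy can then be set separately from this flux, which is the control knob the paragraph above describes.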

Molecular Design in Mind

One of the most important applications of plasma etching is the selective, anisotropic removal of patterned silicon or polysilicon films. Precursor feedstock gasses bearing halogen etchant atoms (F, Cl, Br) are almost always used for this purpose. Common feedstock gasses for F atoms are CxFy, SF₆, and NF₃. Understanding the physical and chemical processes in reactive plasmas requires reliable elementary finite-rate chemical reaction mechanisms. Tetrafluoromethane (CF₄) is one of the most frequently used gasses for the generation of F atoms. Admixing a small percentage of oxygen to a CF₄ plasma dramatically increases the etch rate of silicon surfaces, and can also be used to control the lateral etching of silicon.
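Such mechanisms are typically expressed as sets of finite-rate reactions whose rate coefficients depend on electron temperature. The sketch below evaluates a modified-Arrhenius form for a single hypothetical electron-impact dissociation step; the coefficients are placeholders, not a validated CF₄ mechanism:

```python
import numpy as np

# Illustrative modified-Arrhenius rate coefficient for an
# electron-impact dissociation step such as e- + CF4 -> CF3 + F + e-.
# A, n, and E_a below are hypothetical placeholder values.
A = 2.0e-15    # pre-exponential factor [m^3/s]
n = 0.5        # temperature exponent
E_a = 12.0     # activation energy [eV]

def rate_coefficient(Te_eV: float) -> float:
    """k(Te) = A * Te^n * exp(-E_a / Te), with Te in eV."""
    return A * Te_eV**n * np.exp(-E_a / Te_eV)

for Te in (2.0, 3.0, 5.0):
    print(f"Te = {Te:.1f} eV -> k = {rate_coefficient(Te):.3e} m^3/s")
```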

Distribution of electron temperatures in an ICP reactor modeled using VizGlow.

Tetrafluoromethane (CF₄) is an important feed gas for plasma etching of silicon. It is relatively easy to handle, non-corrosive, and has low toxicity. CF₄ has no stable electronically excited states, which means that the electron energy is spent on the generation of chemically active ions and radicals without electronic excitation losses. While tetrafluoromethane plasmas have been studied since the early development of plasma etching processes, the influence of various gas-phase and surface reactions on the densities of active species is still poorly understood.

VizGlow is a full-featured, high-fidelity simulation tool for modeling chemically reactive plasmas, which are present in half of the steps of the semiconductor fabrication process described above. The characteristics of gas species and the kinetic modeling of their reactions remain an area with unexplored potential for further innovation. Radicals created by plasmas are extremely reactive due to their unpaired electrons, which semiconductor engineers exploit to speed up process and cycle times. The same is true for deposition processes, where radicals prevent damage to the chip as it cools from the >1000 °C temperatures produced within etching equipment. Throughout these processes, defects, impurities, and nonuniformities can be detected and diagnosed with help from simulated models. Simulations using VizGlow can help guide design iterations to avoid operating conditions that could compromise wafers even after months of processing.

Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Levko, Dmitry, et al. “Computational study of plasma dynamics and reactive chemistry in a low-pressure inductively coupled CF4/O2 plasma.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 39.4 (2021): 042202.

Levko, Dmitry, Chandrasekhar Shukla, and Laxminarayan L. Raja. “Modeling the effect of stochastic heating and surface chemistry in a pure CF4 inductively coupled plasma.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 39.6 (2021): 062204.

Levko, Dmitry, et al. “Plasma kinetics of c-C4F8 inductively coupled plasma revisited.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 40.2 (2022): 022203.

Lee, Chris GN, Keren J. Kanarik, and Richard A. Gottscho. “The grand challenges of plasma etching: a manufacturing perspective.” Journal of Physics D: Applied Physics 47.27 (2014): 273001.

Kanarik, Keren J. “Inside the mysterious world of plasma: A process engineer’s perspective.” Journal of Vacuum Science & Technology A: Vacuum, Surfaces, and Films 38.3 (2020): 031004.


Marchack, N., et al. “Plasma processing for advanced microelectronics beyond CMOS.” Journal of Applied Physics 130.8 (2021): 080901.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com. This post’s feature image is by Laura Ockel on Unsplash.

Esgee Presenting at This Year’s SPIE Advanced Lithography+Patterning 2022 Conference


SPIE, the international society for optics and photonics, is holding its annual Advanced Lithography & Patterning Conference in San Jose, California from April 24th-28th. This conference gathers a community of experts in semiconductor design and fabrication to review current research, discuss major breakthroughs, and network with peers.

EsgeeTech’s paper, “VizGlow-MPS: a multi-fidelity process simulator for fast, yet accurate, semiconductor process design and optimization,” is being featured as part of the conference program. The paper discusses how high-fidelity models made with our software, VizGlow™, provide experimentally validated results for equipment operation that inform reduced-order models which predict results in a few minutes of wall-clock time. The approach constitutes a “digital twin” for process reactors with multiple levels of fidelity that a process engineer can choose from. This approach is demonstrated on c-C4F8 inductively coupled plasma and pulsed CF4/H2 capacitively coupled plasma widely used in etching applications.

 

Esgee’s presentation is on April 27th from 2:50 PM – 3:10 PM PDT (4:50 PM – 5:10 PM CDT) in Convention Center room 210C.


Photo by Anne Jacko // CC BY-SA 2.0

Clearing the Dust with VizGrain


In the fight against airborne particulates, semiconductor manufacturers face the unrelenting threat of contamination within their labs and facilities. Airflow, microfiltration, air ionization, air pressure, humidity controls, polymer toolsets, anterooms / air showers, and cleanroom suits are just a few of the considerations that manufacturers must make, all in the name of isolating lab processes from the outside world. Despite extensive procedures and costly investments, it is nearly impossible to produce an environment completely devoid of pollutants, and the most common of all is microscopic fine particles of solid matter, known simply as “dust.”

Unlike the dusty plasmas found in the Earth’s mesosphere and among celestial bodies that fascinate astrophysicists, dusty plasmas in labs here on Earth are a constant source of frustration within the semiconductor industry. Their presence within reactors and manufacturing equipment continues to threaten contamination of wafers and other critical components.

Perhaps someday the presence of dusty plasmas in semiconductor manufacturing facilities will cease, whether through technological innovation or the perfection of cleanroom procedure. Until that time, simulations must account for every piece of relevant physics in order to create realistic models of these environments.

Making the Dust Fly for Semiconductor Manufacturers

Dust particles are common contaminants of the plasma processing discharge chambers used in the semiconductor industry for etching and deposition. These particles can range from several nanometers to several hundred micrometers in size, accrue a relatively large negative charge in the plasma, and are consequently trapped electrostatically within it. Large particles usually accumulate near the sheath edge, while small particles accumulate in the center of the discharge chamber, where the electrostatic potential is usually the most positive.
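The magnitude of that negative charge can be estimated by treating the grain as a small spherical capacitor charged to the local floating potential; the numbers below, including the floating-potential prefactor, are assumed for illustration:

```python
import numpy as np

# Order-of-magnitude estimate of the charge on a dust grain at the
# floating potential, modeled as a spherical capacitor. The potential
# prefactor (a few Te) is an assumed, typical-scale value.
eps0 = 8.854e-12   # vacuum permittivity [F/m]
e = 1.602e-19      # elementary charge [C]
a = 0.5e-6         # grain radius [m] (~1 micron diameter)
T_e = 3.0          # electron temperature [eV]
phi_f = -2.5 * T_e # assumed floating potential [V]

Q = 4 * np.pi * eps0 * a * phi_f   # grain charge [C]
print(f"~{abs(Q) / e:.0f} elementary charges (negative)")
```

For these inputs the estimate comes out to a few thousand electron charges on a micron-scale grain, which is why electrostatic trapping dominates the particle's behavior.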

Trajectories of particles (a) with Wafer Bias and (b) without Wafer Bias. Size of particles ranges from 0.15 microns to 0.5 microns. Image Source: Kobayashi et al.

For semiconductor manufacturers, formation of particles within a plasma and the effect of dust particles on a semiconductor’s processing surface are determining factors for overall process quality and yield. Developing macro-particle kinetic models that account for all associated physics (macro-particle growth, charge-up, and transport within a plasma) is a necessity in modern semiconductor processing reactor design.

As a result, applications for studies of dusty plasmas focus primarily on particle transport and plasma distribution. This is the case in plasma etching / deposition systems, where particle behavior can be characterized by calculating the measured gas temperature distribution and the resulting thermophoretic force. Thermophoretic force can then be managed through plasma distribution controls and by changing gas temperature distributions across wafers.

Until recently, these breakthroughs in particle control and plasma distribution relied on expensive and elaborate experiments. Now, particle-based simulations through software like VizGrain are able to predict these behaviors while including the core features necessary for creating a computational model:

  1. A multi-subdomain capability, where multiple solids and gas regions can be described simultaneously.
  2. Unstructured meshing for representing complex topologies with fine geometric features.
  3. Modeling of static electric and magnetic fields as well as electromagnetic waves through coupling with an electromagnetics solver.
  4. Treating subsets of the overall gas composition as a continuum through coupling with a classical fluid flow solver and a plasma solver.

VizGrain: A Versatile Computational Tool for Particle Simulations

 A unique aspect of VizGrain is that it allows computational modeling of particle dynamics in a variety of systems, including:


  • rarefied gas dynamics
  • gas discharge plasmas
  • macroscopic particle dynamics (e.g., dust particles, droplets, etc.)

VizGrain allows working with atomic-size particles as well as particles of finite macroscopic size. The former approach is used to model rarefied gas dynamics and conventional non-equilibrium plasmas, while finite-size macro-particles are used to model dusty plasmas, aerosols, and droplets, to name a few. In this latter case, the electrical charge-up of particles in the plasma environment is also considered. Additionally, these models feature a comprehensive variety of drag forces that can act on both atomic and macro-particles.

VizGrain solves governing equations that describe the transport (motion and collisions) as well as the generation and destruction of particles in a specified domain. A number of different particle types, both “atomic” and “macro-scale,” can be solved for.
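As a toy illustration of the transport piece (not VizGrain's actual integrator), the sketch below advances a single charged macro-particle under an electrostatic force and a linear drag force with an explicit time-stepping loop; all values are hypothetical:

```python
import numpy as np

# Minimal explicit pusher for one charged macro-particle under a
# uniform electrostatic force plus a linear (Stokes-like) neutral drag.
# Illustrative toy only; values are hypothetical.
q_over_m = -1.0e-3   # charge-to-mass ratio [C/kg]
gamma = 50.0         # drag rate [1/s]
E_field = np.array([0.0, 0.0, 2.0e3])   # uniform electric field [V/m]

x = np.zeros(3)                   # position [m]
v = np.array([10.0, 0.0, 0.0])    # velocity [m/s]
dt, steps = 1e-4, 1000            # time step [s], number of steps

for _ in range(steps):
    # dv/dt = (q/m) * E - gamma * v
    accel = q_over_m * E_field - gamma * v
    v += accel * dt
    x += v * dt

print("final position [m]:", x)
```

A production particle code would add collisions, self-consistent fields, and particle creation/destruction on top of this basic motion step.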

Dusty plasma dynamics in a capacitively coupled plasma (CCP) reactor generated using VizGlow™ (fluid) and VizGrain (dust particles).

Electrically neutral species and radicals, as well as electrically charged species like electrons and positive and negative ions, are atomic particle types that can be considered simultaneously. Macro-scale particle types, by contrast, include molecular clusters and larger micron-to-millimeter scale dust particles.
All such particles have mass, charge, and size attributes. The mass of atomic particles is immutable, while that of macro-scale particles can change based on governing laws. Similarly, atomic particle charge is fixed, while macro-scale particle charge can change through charge-up processes.

 

In VizGrain, all the particles in a swarm are classified according to “particle type.” All particles of a given type share properties such as mass, charge, and size (diameter or cross section). Extensive use of object-oriented programming principles keeps the implementation modular, so the list of properties can be extended whenever necessary.
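A bare-bones sketch of such a per-type property record might look like this; the field names are illustrative rather than VizGrain's actual API:

```python
from dataclasses import dataclass

@dataclass
class ParticleType:
    """Per-type property record in the spirit described above.
    Field names are illustrative, not VizGrain's API."""
    name: str
    mass_kg: float
    charge_c: float
    diameter_m: float
    mass_mutable: bool = False    # macro-scale grains may grow or shrink
    charge_mutable: bool = False  # macro-scale grains charge up in plasma

# An immutable atomic type and a mutable macro-scale dust type.
electron = ParticleType("e-", 9.109e-31, -1.602e-19, 0.0)
dust = ParticleType("dust", 1e-15, 0.0, 1e-6,
                    mass_mutable=True, charge_mutable=True)
print(electron, dust, sep="\n")
```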

 

VizGrain also offers flexibility in representing practical applications through complex geometries. Meshes can be built from triangles and quadrilaterals (for 2D), as well as tetrahedra, hexahedra, prisms, and pyramids (for 3D), or even a mixture of all the aforementioned cell types.

 

Additionally, meshes can be prepared in a variety of commonly used formats and imported into VizGrain. The code also outputs the maximum number of particles that exist in a cell over the whole mesh at selected screen-output intervals, with warnings for severely skewed cells that could portend poor-quality solutions (especially for the electrostatic potential in PIC simulations). Note that the accuracy of pure particle simulation results is usually insensitive to the quality of the mesh, which has been confirmed in VizGrain simulations.

Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Levko, Dmitry, et al. “VizGrain: a new computational tool for particle simulations of reactive plasma discharges and rarefied flow physics.” Plasma Sources Science and Technology 30.5 (2021): 055012.

Kobayashi, Hiroyuki, et al. “Investigation of particle reduction and its transport mechanism in UHF-ECR dielectric etching system.” Thin Solid Films 516.11 (2008): 3469-3473.

Merlino, Robert. “Dusty plasmas: from Saturn’s rings to semiconductor processing devices.” Advances in Physics: X 6.1 (2021): 1873859.

Merlino, Robert L., and John A. Goree. “Dusty plasmas in the laboratory, industry, and space.” Physics Today 57.7 (2004): 32-39.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

Modern Solutions for Global Semiconductor Manufacturing


Recently, consumers have faced rising prices for semiconductor-powered devices, with shortages affecting the availability of products that serve their daily needs. With increased demand for semiconductors in key areas like the automotive industry, healthcare, and within AI-enabled products, manufacturers are vying to remain competitive while making next-generation breakthroughs in order to meet current demand.

So, the question is: how can industrial researchers continue to innovate in order to boost semiconductor production? And how can simulations relieve the pressure for the semiconductor industry to meet ever-growing global demands? 

A joint report authored by Boston Consulting Group (BCG) and The Semiconductor Industry Association (SIA) is aimed at combating semiconductor shortages by profiling risks in the current international supply chain and highlighting semiconductors as a central component of shared economic stability across the globe. Central to the report’s findings are a series of statistics that characterize the current issues the world faces in securing a future where semiconductor demands are met as they continue to grow over the next decade. 
 

The Global Semiconductor Supply Chain at a Glance

The current cooperative structure of the global supply chain for semiconductors is as unique as it is complex, with a web of destinations across the globe from the earliest stages of research and design to the final point of sale.
Since the 1970s, specialization within these national and regional stages has contributed to the chain’s ability to produce at the speed of demand, while also innovating and improving the capabilities of semiconductors faster than any one country or region could.
 
Figure 1 below shows the current global semiconductor market share by region as of 2020. South Korea and the United States account for two-thirds of total market production and sales.

In addition to utilizing the specializations offered by each of the six major regions (Europe, Japan, Mainland China, South Korea, Taiwan, and the United States) that contribute to the global supply chain, the roundabout system also makes use of favorable trade conditions among the participating countries to keep production costs and consumer prices affordable. Figure 2 below shows the usage of semiconductors by industry.

Major risks in disruptions to the current supply chain could lead to a sharp rise in the cost of devices to producers and consumers alike. In a hypothetical situation where the global chain is replaced with self-sufficient regions, the report forecasts up to $900B–$1,225B of upfront investment required to maintain current output and meet rising demands, with an overall cost increase of 35%–65% for consumers if regional and comparative advantages are ignored.
 
National policies within key regions, most notably Mainland China – which has massive industrial and manufacturing capabilities – have already placed self-sufficiency as a high priority for their future development in semiconductors.
 
Similar policies in other nations could leave local markets open to unforeseen factors, including greater competition for materials and additional costs in their securing and transportation. Situations like natural disasters and geopolitical conflict could destabilize systems that seek to decouple from the international chain, leading to regional shortages of semiconductors and additional issues with production for critical communications and security sectors.
 

Researching and Developing Solutions for the Market

In addition to the current issues that the international supply chain faces, SIA’s report highlights the importance of research and development, which is the primary way that producers maintain state-of-the-art techniques and provide security in their devices.
 
Although the speed of innovation and change in major market devices like consumer electronics is visible from year-to-year, the time for techniques developed at pre-competition research stages to be utilized at a mass scale and included within the global chain can take decades. As a result, original equipment manufacturers (OEMs) and integrated device manufacturers (IDMs) face upfront costs in both R&D and capital expenditure, with years before seeing a return on investment in these areas.
 
Despite the delayed turnaround for companies investing and participating in pre-competitive and basic research, cooperation at these early stages enables chips to become smaller while increasing performance. Recent innovations like 5G, internet of things (IoT), and autonomous vehicles all began their journey to widespread use at this stage. Figure 3 below illustrates regional spending in R&D among key regions as a percentage of sales.

SIA’s report also cites the need for utilization of emergent technologies in alleviating risks and constraints in the global chain, with modern inventions like augmented and virtual reality (AR/VR) playing a crucial role in enabling operations to continue remotely throughout the pandemic.
 
Simulation provides this same bridging of digital and physical worlds, allowing manufacturers to cut material costs and reduce exposure to hazardous materials, all without sacrificing the insights that physical experiments and trials offer.
 

Unique Solutions Require Detailed, High-fidelity Simulations

The use of simulation software and digitally based tools to further minimize the risks that global producers face is both economical and modern, and its viability as an industry-wide solution will only grow with time. Simulations offer additional innovation points through applications for commonly used equipment in the semiconductor industry, such as plasma reactors, with details like simulated angular distribution functions informing process parameters such as excitation frequency and excitation voltage.

Industry leaders like Dr. Peter Ventzek and Dr. Alok Ranjan of Tokyo Electron Ltd., a global supplier of equipment used to fabricate integrated circuits, have already taken advantage of high-fidelity plasma simulation and processing to develop new techniques with a wide array of applications for the semiconductor industry, drawing on the insights offered by numerical simulations with VizGlow™. Here are a few examples of patented methods and techniques, developed using simulations, that are contributing to the semiconductors of today and tomorrow:

  • Mode-switching plasma systems and methods that allow manufacturers to reduce minimum-required features and the cost of ICs, while also increasing the packing density of components. Manufacturers working at the atomic scale are able to continue scaling semiconductor devices with consideration for constraints like equipment configurability, equipment cost, and wafer throughput.

  • Techniques that include formation, patterning, and removal of materials in order to achieve physical and electrical specifications for the current and next generations of semiconductors. Plasma etching and deposition are prone to issues with coupling between source power (SP) and bias power (BP) effects, resulting in reduced control and precision. Decoupling these effects reduces cross-talk between a source and bias, which in turn enhances control while decreasing complexity.

  • Utilizing pulsed electron beams to create new plasma processing methods, which enable reduction of feature size while maintaining structural integrity. As device structures continue to densify and develop vertically, these methods, which produce atomic-level precision in plasma processes, will be useful for profile control, particularly in controlling deposition and etching at the timescales associated with the growth of a single monolayer of film.

Processes in plasma-assisted etching or deposition rely on accurate determination of the distribution of ion energy and angle close to the substrate surface. Precise control over these parameters can be used to manipulate the bombardment of the process surface. From a process engineer’s perspective, however, incremental changes in geometric design, voltage, power, feed gas composition, and flow rates must be correlated with the ion energy and angular distribution function (IEADF).
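Conceptually, an IEADF is just a two-dimensional distribution over ion energy and incidence angle sampled at the wafer. The sketch below bins synthetic samples into such a distribution; in a real workflow the samples would come from solver output rather than a random-number generator:

```python
import numpy as np

# Bin sampled ion energies [eV] and incidence angles [deg] at the
# wafer into an energy-angle distribution. The samples below are
# synthetic stand-ins for plasma-solver output.
rng = np.random.default_rng(0)
energies = rng.normal(loc=120.0, scale=15.0, size=100_000)     # eV
angles = np.abs(rng.normal(loc=0.0, scale=3.0, size=100_000))  # deg

ieadf, e_edges, a_edges = np.histogram2d(
    energies, angles, bins=(64, 32),
    range=((0.0, 200.0), (0.0, 15.0)), density=True)

print("IEADF grid shape:", ieadf.shape)
print("peak bin density:", ieadf.max())
```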

The engineering team at Tokyo Electron Ltd. uses our non-equilibrium plasma solver, VizGlow™, and particle solver, VizGrain™, to understand underlying physics and find the best operating conditions for Tokyo Electron Ltd. products. In a paper published in the Journal of Physics D: Applied Physics, Dr. Rochan Upadhyay and Dr. Kenta Suzuki of Esgee Technologies, along with researchers at The University of Texas at Austin, validated the VizGlow™ simulations used to obtain IEADFs in a capacitively coupled plasma reactor.


Esgee Technologies builds software products, databases, and consulting projects to solve challenges faced by industrial manufacturers. We are dedicated to the development of plasma and physics simulations for applications across a wide range of manufacturing industries, including semiconductors, with a legacy of support for analyzing existing equipment, improving processes, and developing new equipment concepts through the use of our software.


Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Upadhyay, Rochan, Kenta Suzuki, Laxminarayan L. Raja, Peter L. G. Ventzek, and Alok Ranjan. “Experimentally validated computations of simultaneous ion and fast neutral energy and angular distributions in a capacitively coupled plasma reactor.” Journal of Physics D: Applied Physics 53 (2020). doi:10.1088/1361-6463/aba068.


Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.