EsgeeTech to Present at The 75th Annual Gaseous Electronics Conference in Sendai, Japan

The 75th Annual Gaseous Electronics Conference (GEC) begins October 3rd in Sendai, Japan. The occasion brings together plasma scientists and engineers to share and promote ideas across topics ranging from plasma sources, diagnostics, and simulation to biotechnology, plasma chemistry, and atomic/molecular processes.

Esgee Technologies will be among the invited presenters this year, represented by Dr. Dmitry Levko. Our paper, “Development of validated fluorocarbon plasma chemistry for multi-dimensional modeling of semiconductor plasma etch processes,” will be presented on October 6th at 10:30 am local time as part of the conference’s plasma etching session.

The invited talk will discuss recent progress in the development and understanding of fluorocarbon plasma chemical mechanisms. The mechanisms cover perfluorocyclobutane (c-C₄F₈) and tetrafluoromethane (CF₄), two important gases in plasma etching applications. A self-consistent plasma fluid simulation model coupled with a comprehensive finite-rate chemical reaction mechanism is used for mechanism development and validation. First, the talk will examine deficiencies in the plasma chemical reaction mechanisms found in the literature and present an approach for improving them. Second, results of self-consistent simulations of inductively coupled plasmas in pure c-C₄F₈ and CF₄ will be compared with experimental data available in the literature. Finally, the influence of model parameters such as the surface reaction mechanism, gas pressure, discharge power, and electron stochastic heating length scale on the plasma parameters and on the kinetics of the dominant plasma species will be analyzed.
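To make the idea of a finite-rate mechanism concrete, the sketch below evaluates a single electron-impact dissociation step in Arrhenius form. The rate parameters and densities are illustrative placeholders, not values from the VizGlow mechanism discussed in the talk.

```python
import numpy as np

# One illustrative entry of a finite-rate plasma chemistry mechanism:
# e + CF4 -> CF3 + F + e, with a rate coefficient in Arrhenius form.
# The parameters A, n, and Ea below are placeholders, not the values
# used in the validated VizGlow mechanism.
def rate_coefficient(Te_eV, A=2.0e-15, n=0.5, Ea_eV=12.0):
    """k(Te) = A * Te^n * exp(-Ea/Te), in m^3/s, with Te in eV."""
    return A * Te_eV**n * np.exp(-Ea_eV / Te_eV)

def f_atom_source(ne, n_CF4, Te_eV):
    """Volumetric F-atom production rate, dn_F/dt = k(Te) * ne * n_CF4."""
    return rate_coefficient(Te_eV) * ne * n_CF4

# Order-of-magnitude inputs for a low-pressure inductively coupled plasma.
ne, n_CF4, Te = 1.0e16, 3.0e20, 4.0      # m^-3, m^-3, eV
print(f"k      = {rate_coefficient(Te):.2e} m^3/s")
print(f"dnF/dt = {f_atom_source(ne, n_CF4, Te):.2e} m^-3 s^-1")
```

A full mechanism strings together hundreds of such reactions, each with its own rate data, which is what the development and validation effort described above addresses.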

 

We sat down with Dr. Levko, who will be presenting on behalf of EsgeeTech this year, in order to learn more about the applications for this research and how they align with the conference’s goals:

What applications are there for fluorocarbon plasmas? And why choose GEC to discuss them?

Low-pressure fluorocarbon plasmas are used in the semiconductor industry for etching applications. GEC is the most popular conference among scientists working on plasma engineering applications in both academic and industrial settings.

What is the quick takeaway from your talk, and what new information is being shared?

In my talk, I will discuss EsgeeTech’s efforts in developing and validating mechanisms of plasma chemical reactions in fluorocarbons (C₄F₈ and CF₄), specifically for conditions that are typical of plasma etching reactors.

You used VizGlow in your research. Why choose VizGlow specifically? What scenarios / applications is it useful for?

In plasma simulations, obtaining accurate, “high-fidelity” outcomes requires robust plasma chemistry. VizGlow’s extensive chemistry database is what really differentiates it from other commercially available software. VizGlow users like Applied Materials, Lam Research, Kioxia, SK Hynix, Samsung, and Toshiba have all benefited from the availability of over 150 highly complex and industrially relevant plasma chemistries.

VizGlow is designed for high-fidelity, multi-species, multi-dimensional numerical modeling and simulations of plasma reactors that are crucial in these domains. Additionally, EsgeeTech develops plasma chemistries based on the experimental mixtures currently being researched and developed by leaders in the semiconductor industry.

How complex are these plasma chemistries? Why does VizGlow handle complex chemistry so well compared to other software?

Typically, plasma chemistry involves hundreds or even thousands of reactions. VizGlow’s development has centered on getting the simulation fidelity right by incorporating as detailed a mechanism as possible. While other software exists for simulating plasmas, EsgeeTech’s development follows a clearly physics-first, fidelity-centered approach. This allows for computationally efficient coupling of plasma species like electrons and ions with neutral species across a wide range of time and spatial scales.
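The coupling across time scales that Dr. Levko mentions is essentially a stiff-integration problem: electron-driven processes evolve orders of magnitude faster than neutral chemistry. The toy system below is only an analogy for that stiffness and assumes nothing about VizGlow’s actual numerics, but it shows why an implicit solver is the natural choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-timescale system standing in for plasma chemistry stiffness:
# x relaxes on a ~1 microsecond "electron" timescale, y evolves on a
# ~1 millisecond "neutral" timescale. Not an actual plasma mechanism.
def rhs(t, state):
    x, y = state
    dxdt = -1.0e6 * (x - 0.1 * y)   # fast, electron-like process
    dydt = 1.0e3 * (x - y)          # slow, neutral-like process
    return [dxdt, dydt]

# An implicit (stiff) integrator steps over the fast scale efficiently.
sol = solve_ivp(rhs, (0.0, 1.0e-2), [1.0, 0.0], method="BDF", rtol=1e-8)
print(f"steps: {sol.t.size}, final state: {sol.y[:, -1]}")
```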

Can you tell us about any important chemistry that you are working on at the moment? 

I’m afraid this is confidential information. I suggest following my work on Google Scholar or ResearchGate to learn more about what I am working on.

How do you think VizGlow could help the semiconductor industry? 

In an aggressive industry like semiconductors, a good predictive simulation model can be the difference between a product’s success and failure. The US is spending billions of dollars to spur innovation, but this money is only useful if it pays for high-fidelity applications. VizGlow is a proven workhorse in the semiconductor industry for equipment concept development, design optimization, and semiconductor process / recipe development.

Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Levko, D., (2022, October 6). Development of validated fluorocarbon plasma chemistry for multi-dimensional modeling of semiconductor plasma etch processes [Conference presentation]. GEC 2022 Convention, Sendai, Japan. https://meetings.aps.org/Meeting/GEC22/Session/ER2.3

Levko, Dmitry, et al. “Computational study of plasma dynamics and reactive chemistry in a low-pressure inductively coupled CF4/O2 plasma.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 39.4 (2021): 042202.

Levko, Dmitry, Chandrasekhar Shukla, and Laxminarayan L. Raja. “Modeling the effect of stochastic heating and surface chemistry in a pure CF4 inductively coupled plasma.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 39.6 (2021): 062204.

Levko, Dmitry, et al. “Plasma kinetics of c-C4F8 inductively coupled plasma revisited.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 40.2 (2022): 022203.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

The Taguchi Method: Justification for Robustification

Within the scientific method and its techniques for pursuing knowledge, experiments are the vehicle through which empirical facts are established. The early approach of “trial and error” is what design of experiments (DOE) aims to improve upon. By applying statistical analysis to natural phenomena, experimenters can improve the setup, execution, and conclusions drawn from trials – and errors. Experiments in modern times are critical for researchers and manufacturers alike, and occur much earlier in product life cycles than in pre-industrial eras. As good as a product concept may be, manufacturers must provide quality not only by making an efficient product, but by making it consistently throughout the product’s lifecycle.

Through experimentation, something as potentially simple as establishing causation between two factors can create a cascade of effects that ripple through a product’s design, saving costs while improving functionality. But how can these factors be considered and quantified along with the design of the product? And what if these same factors could provide knowledge about what aspects of the product are most central in determining quality and satisfaction?

These were among the questions that Genichi Taguchi considered as he worked to improve Japan’s telephone network in the 1950s. An engineer himself, Taguchi proposed a design of experiments that coupled critical thinking about a product and its crucial factors to a statistical, numerical process. This approach not only aimed to cut costs by establishing a single, optimal iteration of a product, but also sought to cut deviation from that optimal state by considering the relationship between noise (uncontrollable) and signal (controllable) factors in design and improvement.

The Taguchi method uses the concept of a loss function to determine the quality of a product, which offers experiment facilitators and data analysts an alternative perspective on the data being collected and processed. For Taguchi, loss is measured as a product’s loss to society, calculated from variations in performance and their effects. A product that functions consistently regardless of environment and user is considered robust, and for Taguchi this is the key feature of a high-quality product. At a glance, the Taguchi method presents the case for robustification, along with an associated methodology for achieving it.

 

LASER-FOCUSED ON DESIGN: APPLYING THE TAGUCHI METHOD

If a company were developing a laser used to create tiny patterns on materials (a rudimentary description of the etching process used in semiconductor manufacturing), then the quality of the laser would be, in part, determined by the amount of variance from the standard found in the patterns it creates. In a case where one such laser could cost millions to develop and produce, the Taguchi method would devote greater time to the research and development stage to establish that every laser will etch a pattern that meets specified requirements.

Following the Taguchi method, loss could be measured as any negative effect resulting from the design of the product. The potential for an operator to be injured while operating the laser and materials rendered defective by incorrect or imprecise patterns are two clear ways that loss could occur, and both would receive special attention at the design and early iteration stages of a product.

Additional considerations for loss would include broader, less tangible negative outcomes of the product. Waste produced, loss of future sales due to a drop in brand confidence, and any post-production costs to fix problems with the product can and would be included in Taguchi’s loss function.

 

SEEING QUALITY AS NON-LINEAR

Certain aspects of the Taguchi method are philosophical in nature and describe the way a company should analyze or conceptualize its products. They include three main points, sometimes referred to as fundamental concepts:

1) Quality Must be Designed into the Product: 

Understanding the aspects of product design that influence quality implies an understanding of the product itself. Knowing the product, the user, and its intended use-cases may seem a simple task, but accounting for them in a pre-manufacturing stage – that is, before the product, user, or use-case exist – is part of the overall reimagination of how products should be created.

Implicit in the Taguchi method is the belief that manufacturing processes are flawed and can only introduce problems into design. Thus, adjustments and iterations take place at a preceding stage before they reach any potential manufacturing or assembly line. Consequently, this approach is also called “off-line design” or “off-line quality control.”

2) Quality is Realized by Minimizing Deviation from the Target: 

Investments that reduce variation from a target optimal state in a product have favorable return on investment (ROI), especially when customer satisfaction, replacements, and post-production improvements are factored into cost. Along with bolstering brand loyalty, addressing these factors early – and continuously – makes design robust and helps eliminate loss resulting from the aforementioned pathways.

(Fig. 1) Taguchi’s loss function, L(Y), features a quadratic formula that illustrates a product’s performance (Y) as it deviates from its target (t). The vertical bars (D) show customer tolerance. Their intersection with monetary loss (M) represents when this tolerance is exceeded.
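As a rough numerical sketch of the loss function in Fig. 1, the snippet below uses the standard quadratic form L(y) = k(y − t)², with k chosen so that the loss equals the customer’s monetary loss M at the tolerance limit t ± D. The target, tolerance, and dollar figures are illustrative assumptions only.

```python
def taguchi_loss(y, target, tolerance, loss_at_tolerance):
    """Quadratic loss L(y) = k*(y - target)^2, with k = M / D^2 so that
    the loss equals the monetary loss M right at the tolerance limit."""
    k = loss_at_tolerance / tolerance**2
    return k * (y - target) ** 2

# Illustrative numbers: 10.0 um target feature, +/-1.0 um customer
# tolerance (D), and a $50 loss (M) when that tolerance is exceeded.
for dev in (0.0, 0.5, 1.0, 1.5):
    loss = taguchi_loss(10.0 + dev, target=10.0, tolerance=1.0, loss_at_tolerance=50.0)
    print(f"deviation {dev:+.1f} um -> loss ${loss:6.2f}")
```

Note that the loss grows quadratically well before the tolerance limit is reached, which is the parabolic behavior referred to later in this article.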

3) Quality is a Function of Deviation:

Placing a primary emphasis on the relationship between quality and cost of failure establishes a guide for optimal improvement of a product. The Taguchi method measures losses at the systemic level and can factor in any costs associated with the return of a product; warranty, re-inspection, replacement, and even costs extended to the customer are all factors that contribute to loss under the Taguchi method.

 

FACTORS IN THE TAGUCHI METHOD

Taguchi’s approach allows experiment facilitators to focus on the control factors that affect performance most consistently, while explicitly accounting for uncontrollable factors in the design. There are three central aspects of the overall structure:

Systems Design: 

The “brainstorming” and synthesis of a product or process to be used. Systems design occurs early on during conceptualization of a product. This stage focuses on achieving functionality through innovation. After these creative avenues have been exhausted, the basis for parameter design should be established, as it is the next stage in Taguchi’s process.

Parameter Design: 

Parameter design in the Taguchi method aims at creating a product that is robust to both the environment and the user. Designing a set of rules that determine design elements, then defining each rule using parameters and components, helps quantify and diagnose variation in a given product. Given this focus, the term “parameter design” is often used interchangeably with “robust design.”

However, Taguchi also makes use of orthogonal arrays, which fall under a greater scheme of orthogonal array testing strategies (OATS) and are meant to provide an alternative to other quality control methods which can be prohibitive as a result of setup cost, time constraints, or other factors that make them otherwise impractical. In this sense, Taguchi’s orthogonal arrays are an alternative to full factorial experimental design, which simply – and exhaustively – tests every possible combination of states and variables.

The robustness of a product is assessed using a signal-to-noise (S/N) ratio, computed from the mean response relative to its variation. Whereas other methodologies may simply try to minimize noise in the experiment, Taguchi’s approach makes use of both the signal (the desired value) and the noise (the undesired variation). The resulting distribution around the desired values shows which control factor settings are most robust to noise factor variation.
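A minimal sketch of this idea, assuming a standard L4(2³) orthogonal array and made-up repeated measurements: each experimental run gets a “nominal-the-best” S/N ratio of 10·log₁₀(mean²/variance), and the runs with the highest ratios point to the most robust factor settings.

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs covering 3 two-level factors such that
# every pair of columns contains each combination of levels equally often.
L4 = np.array([
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
])

# Hypothetical repeated measurements (e.g. an etch depth in um) per run.
responses = np.array([
    [ 9.8, 10.1,  9.9],
    [10.4, 10.6, 10.3],
    [ 9.2,  9.5,  9.4],
    [10.0, 10.0,  9.9],
])

# "Nominal-the-best" signal-to-noise ratio: 10*log10(mean^2 / variance).
mean = responses.mean(axis=1)
var = responses.var(axis=1, ddof=1)
sn = 10.0 * np.log10(mean**2 / var)

for run, (levels, ratio) in enumerate(zip(L4, sn), start=1):
    print(f"run {run}, factor levels {levels}: S/N = {ratio:5.1f} dB")
```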

Tolerance Design:

Tolerance design generally comes after parameter design studies. This stage specifies how tolerances should be tightened to improve quality and identifies the crucial tolerances in a product or process design. A complex process or product may already have tight tolerance requirements, but selecting materials that tolerate variability well (preferably during the system or parameter design stages) helps quantify how crucial further tightening is to achieving a robust design. In Taguchi’s experience, simply meeting tolerances is not as favorable as an approach that seeks to meet the target while minimizing variance around it.

Although it may intuitively make sense that a product with a tolerated deviation of ±1 micrometer could be made better by tolerating only ±0.5 micrometers, such a change could be cost prohibitive for a manufacturer. Without a preliminary parameter design and subsequent testing, the change may also do little to increase the overall quality of the product.

Taguchi’s consideration for product deviation from target values is also contrary to a mindset in manufacturing that treats quality as a binary process, where items are either within or beyond specification. Taguchi includes tolerance ranges, with different levels of tolerance for components of varying importance to the overall design. As a result, quality under the Taguchi method is a curve, and takes on a parabolic shape when factored into the loss function (Fig. 1, above).

 

TAGUCHI’S APPROACH IN THE DIGITAL AGE

Many modern manufacturing life cycles reflect values inherent in the Taguchi method. Greater emphasis on R&D and baselining means that many companies go through more iterations of a product or prototype before continuing to the production and logistics stages. In such cases, this is generally a result of the cost-effectiveness that greater off-line quality control offers. For some specific industries like software and digital products, physical manufacturing may not even factor into a product’s lifecycle. However, system, parameter, and tolerance controls under Taguchi’s approach are still applicable, and quality assurance continues to play a major role in identifying and fixing problems in digital environments.

Established automobile manufacturers represent the opposite side of the spectrum, where complex manufacturing processes take considerable effort and resources. Mistakes in design (for Taguchi, the only mistakes there are) can result in global recalls of their vehicles and unforeseen repair costs. Any manufacturer facing bankruptcy as a result of a recall would likely determine that their testing and design experiments were not robust enough. The incurred costs would also factor into the Taguchi loss function for the company.

 

ROBUST MULTIPHYSICS DESIGNS

A thorough understanding of a product’s physics is relevant to its adoption. In the case of an electric circuit breaker, it is important to understand how fast it disconnects from the circuit, how resilient it is to mechanical impacts, and how effective it remains under adverse weather conditions. Given the variety of conditions that equipment can be exposed to, experimental testing – and implementation of the Taguchi method – for the circuit breaker design becomes challenging. A thorough sweep of parameters via experimental investigation becomes impractical given the manual effort and experimental costs involved.

A high-fidelity multiphysics solver presents a solution for design insights in these cases. Understanding the physics behind the product for various operating and abuse conditions not only makes a product robust, but also catalyzes a revolution in product design. VizSpark™, as a high-fidelity thermal plasma flow solver, is already being used in the industry to provide further insights on conventional design, achieve faster design iterations, and reduce product iteration cycle times.

The figures below show published work by Ranjan et al., where VizSpark™ was used to simulate electrical disconnection in electric vehicle relays. The varying factor across the simulations is the gas composition: the assessment was made for different levels of hydrogen in hydrogen-nitrogen mixtures. Taguchi’s approach could be implemented in a similar way for different levels of purity of a given gas mixture. The use of multiphysics solvers and simulations offers system, parameter, and tolerance insights without the attached costs of physical experiments.
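As a sketch of what such a simulation-based study might look like, the snippet below builds a small run matrix over gas composition and two other factors. The factor names and levels are illustrative assumptions, not VizSpark™ input parameters; for larger studies, a Taguchi orthogonal array would replace the exhaustive full-factorial product shown here.

```python
import csv
import itertools

# Hypothetical factor levels for a simulation-based robustness study.
# These names are illustrative only, not VizSpark input keywords.
factors = {
    "h2_fraction":       [0.05, 0.20, 0.40],  # H2 share of the H2/N2 fill gas
    "fill_pressure_bar": [1.0, 2.0],
    "contact_gap_mm":    [1.5, 3.0],
}

# Full factorial here is only 3*2*2 = 12 cases; a Taguchi orthogonal array
# would cut this down further for studies with many factors.
with open("run_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["case"] + list(factors))
    for i, combo in enumerate(itertools.product(*factors.values()), start=1):
        writer.writerow([f"case_{i:02d}"] + list(combo))

print("wrote run_matrix.csv with 12 simulation cases")
```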

Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Ranjan, R., Thiruppathiraj, S., Raj, N., Karpatne, A. et al., “Modelling of Switching Characteristics of Hydrogen-Nitrogen Filled DC Contactor Under External Magnetic Field,” SAE Technical Paper 2022-01-0728, 2022.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

EsgeeTech Presenting at IPMHVC 2022

This week, IEEE begins its International Power Modulator and High Voltage Conference, along with the jointly held Electrical Insulation Conference (IPMHVC/EIC) in Knoxville, Tennessee. Engineers and scientists involved in applications for power modulators and high voltage technologies will converge to share and discuss their knowledge and work over the next few days.

Esgee Technologies will be among the presenters this year, represented by Douglas Breden. Our papers, “Computational Study of Plasma Flow in Arcing Horns During a Voltage Surge” and “Numerical Simulation of Arcing During Contact Separation in SF6-Filled High Voltage Circuit Breaker” will be presented back-to-back on June 21st at 10am. These papers are part of the “Plasmas, Discharges, and Electromagnetic Phenomena” session within the conference.

Both of our papers include simulations made with VizSpark, our plasma-flow solver for thermal (arc) plasmas. These talks are our first open demonstration to the high-voltage community and insulation researchers of how thermal arcs can be modeled with high-fidelity multiphysics software. We present the three-dimensional simulation of arcing in high-voltage interrupters and the plasma-flow simulation of arcing between arcing horns.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

Using VizSpark to Model Electrical Discharge in Combustion Engines

Argonne National Laboratory represents the U.S. Department of Energy’s commitment to cooperative research and scientific discovery. Since its inception in 1946, Argonne has pioneered laboratory research and experimentation as the first national laboratory in the United States. While much of its research in the decades following its founding centered on nuclear energy and applications, Argonne has expanded beyond nuclear research to additional energy sources and storage since the beginning of the 21st century. Today, Argonne constitutes a scientific community of leading researchers, with projects across a spectrum of computational, quantum, and interdisciplinary fields.

Among the contributors in this area are Dr. Joohan Kim and Dr. Riccardo Scarcelli. Their work on modeling spark discharge processes in spark-ignition (SI) engines was recently recognized by Argonne. Dr. Kim received a Postdoctoral Performance Award in the area of Engineering Research, along with ten other postdoctoral appointees whose contributions set a standard not only for the quality of their discoveries, but also for the ingenuity of their techniques and demonstrated leadership capabilities. According to Argonne, awardees’ works have upheld core values of scientific impact, integrity, respect, safety, and teamwork.

Within the highly competitive automotive industry, the need for innovation through design creates opportunities for new tools and technologies. Regulations from governing entities seek to strike a balance between meeting climate goals through greater restrictions on CO2 emissions from automobiles and relying on the transportation industry to fuel trade and commerce. With restrictions focused squarely on reducing emissions, applications that meet these criteria without sacrificing capabilities stand out for manufacturers and legislators alike.

Dr. Kim’s work highlights the need for predictive models which can optimize operational parameters for SI systems in order to maximize thermal efficiency gain and lower engine development costs. Creating these predictive models requires advanced simulation software capable of solving and coupling electromagnetic physics and fluid dynamics into a computational framework. When we asked about his use of simulations, Dr. Kim said, “high-fidelity simulations enable us to perform in-depth analysis of the spark-ignition process, including energy transfer, birth of flame kernel, and thermo-chemical properties; these would be difficult to obtain using experimental techniques only.” He went on to add that, “with a fundamental understanding of complex physics, we can develop predictive models that make simulation-based optimization robust and reliable.”

“VizSpark provided a fully-coupled framework between electromagnetic physics and fluid dynamics, and thereby we were able to diagnose the plasma properties occurring within tens of nanoseconds without many assumptions.”

Dr. Kim’s study utilized VizSpark simulations to accurately estimate electrical discharge shape, as well as temperature and pressure of plasma kernels, thus providing a set of robust initial and boundary conditions for studying flame kernel growth under engine-like conditions. He noted “VizSpark provided a fully-coupled framework between electromagnetic physics and fluid dynamics, and thereby we were able to diagnose the plasma properties occurring within tens of nanoseconds without many assumptions.”

VizSpark is a robust, industrial simulation tool for high-fidelity modeling of thermal (arc) plasmas. Additionally, VizSpark is fully parallelized and can be used to perform large, 3D simulations with complex geometries. Its comprehensive solvers and scalability make it ideal for solving real world engineering problems.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

Mirroring the World with Digital Twins

Twins in literature and mythology are a shared theme across cultures and ontologies, exploring early concepts like duality, polarity, and unity. However, equal to these themes were explorations of concepts like loss, fratricide, and self-realization through remorse. Indeed, for every Castor and Pollux, there is a Cain and Abel, or a Romulus and Remus. Twins in myth evoke an impressionistic reaction to the triumphs and tragedy that they represent. Efforts of the current decade may tell us which of the two will ultimately characterize the concept of digital twins and their implementation.

Since being coined in 2003 by Michael Grieves, the term “digital twin” has become an ambiguous label for the future of simulation and modeling applications. While Grieves’ earliest intention was to improve product life cycles, the idea of high-fidelity, virtual representations of physical objects seemed like a certain future for computational modeling, given technological capabilities and their growing role in product design and iteration processes.

What was once Grieves’ insight into the future of technological applications has become a catch-all for any number of virtual models for physical entities, as well as the flow of data between them that provides parity. The resulting ambiguity in the phrase is due to its widespread usage across industries and the dynamic nature of evolving methodologies to reach the virtual “mirrored” / “twinned” ideal.

As with any other technology, there are limitations to simulations and computational models that tend to be overshadowed by their perceived benefits and desired insights. Moving from the abstract to the concrete, requirements and standards for what constitutes a digital twin are yet to be seen. What’s more, the concept of a digital twin is arguably not new at all, but simply an aggregation of techniques and research already in existence.

 

SPECULUM SPECULORUM

An issue with the popularity of terms like “digital twin” is that they risk becoming misnomers due to a lack of common development methodology, much like the internet of things (IoT) platforms they rely on, which require no internet connection at all. Digital twins face difficulties not only in procuring enough sensor data to mirror physical entities, but also in procuring and applying the correct data to become accurate representations. For example, a digital twin for a car’s braking system could use predictive models of brake pad wear to anticipate when maintenance will be needed. However, even such a specific system would rely on numerous external factors like environment, temperature, and lubrication, as well as an IoT platform for sensors that communicate and collect data from connected assets, or parts. The absence of any one of these parameters could result in incomplete or erroneous data that leads to faults in the virtual entity. Identifying missing parameters and diagnosing inconsistencies between physical and virtual entities can make their usage prohibitive in terms of both cost and labor.
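To make the brake pad example concrete, here is a deliberately simple sketch of the kind of predictive calculation such a twin might run on streamed sensor data. The wear rate, thresholds, and temperature correction are invented placeholders, not calibrated values from any real braking system.

```python
# Toy predictive-maintenance model for brake pad wear, of the kind a
# predictive twin might evaluate on streamed sensor data. All constants
# below are illustrative placeholders, not calibrated values.
PAD_THICKNESS_MIN_MM = 3.0        # service limit
WEAR_MM_PER_STOP = 2.0e-4         # nominal wear per braking event

def remaining_stops(current_thickness_mm, avg_temp_c):
    """Estimate braking events left before the pad reaches its service limit.
    A crude temperature factor stands in for the environmental noise
    factors (climate, lubrication, driving style) mentioned above."""
    temp_factor = 1.0 + max(0.0, avg_temp_c - 150.0) / 300.0
    usable_mm = current_thickness_mm - PAD_THICKNESS_MIN_MM
    return int(usable_mm / (WEAR_MM_PER_STOP * temp_factor))

# Example: 8.5 mm of pad left, running hot at an average of 220 C.
print(f"~{remaining_stops(8.5, 220.0)} braking events until service")
```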

The figure below shows hypothetical examples of digital twin implementations for an atomic layer deposition reactor, a complex machine used to deposit thin films onto materials.

 

At their core, digital twins are real-time, virtual representations of physical entities enabled by sensors and data. Twins can take on specific roles depending on the type of problem they solve or the advantages they offer. Adopting the model introduced by Oracle, there are three primary implementations for twins:

 

Virtual Twins

A virtual representation of a physical entity or asset. A virtual twin holds data, best described as parameters, provided from its physical counterpart, and requires a connection through which it can retrieve information from the physical environment. The type and number of parameters sent across this connection – as well as their accuracy – are the primary attributes in grading and defining the “fidelity” of the virtual entity.

 

Predictive Twins

As the name suggests, this implementation focuses on creating predictive models and is not a static representation of a physical entity, but one based on data gathered from historic states. These twins serve to detect problems that could occur at a future state and proactively protect against them, or give designers the opportunity to diagnose and prevent the problem. Predictive twins are potentially much simpler than other implementations, and can focus on specific parameters like machine data rather than constantly receiving information from sensors and recreating a full virtual environment.

 

Twin Projections

This implementation is also used to create predictive models, but relies heavily on IoT data exchange between individually addressable devices over a common network, rather than sensors or physical environments. Applications or software that generate insights from the IoT platforms generally have access to aggregate data that is used to predict machine states and alleviate workflow issues.
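A minimal sketch of how the first two implementations above differ in the data they consume is given below. The class and field names are hypothetical illustrations, not an Oracle or IoT-platform API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VirtualTwin:
    """Mirrors the current state of one physical asset via live parameters."""
    parameters: Dict[str, float] = field(default_factory=dict)

    def sync(self, sensor_reading: Dict[str, float]) -> None:
        # Fidelity depends on which parameters arrive and how accurate they are.
        self.parameters.update(sensor_reading)

@dataclass
class PredictiveTwin:
    """Keeps a history of states and extrapolates a future value from it."""
    history: List[float] = field(default_factory=list)

    def predict_next(self) -> float:
        # Naive linear extrapolation from the last two historic states.
        if len(self.history) < 2:
            return self.history[-1] if self.history else 0.0
        return 2 * self.history[-1] - self.history[-2]

# Usage: a virtual twin syncs live readings; a predictive twin extrapolates.
vt = VirtualTwin()
vt.sync({"pad_thickness_mm": 8.5, "avg_temp_c": 220.0})
pt = PredictiveTwin(history=[10.0, 10.4, 10.9])
print(vt.parameters, pt.predict_next())
```

A twin projection would then aggregate this kind of data from many individually addressable devices over a common network rather than from a single asset.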

There are a number of issues that each implementation faces. Maintaining connectivity to sensors for data transfer from physical entities, the volume of network traffic between devices, and the identification of key parameters are make-or-break factors in implementing successful twins. The still-unstandardized methods of collecting data further exacerbate the situation, with most avenues for standardization lying in shared models and information.

The issue that results from relying on such collaborations has to do with data ownership, an issue already marred by controversies both moral and legal. Nonetheless, the promised improvements in behavior, conformity, design, manufacturability, and structure have already attracted major attention from researchers.

 

BEAUTY IN COMPLEXITY

Given the broad applications and ambitious tech behind the concept, the question of what cannot be digitally twinned is interesting to consider, especially given that a digital twin of Earth is already in production. The answer depends ultimately on what a digital twin’s use-case is, and to what degree it is able to achieve and produce desired results.

Using this as a criterion doesn’t narrow the already broad definition of what constitutes a digital twin; one could argue that established technologies like Google Maps and Microsoft Flight Simulator are digital twins. While this may detract from its novelty, “digital twin” as a term also carries an undertone of possibility through connectivity. Excitement surrounding digital twins is heavily tied to the anticipation of a new level of interconnectedness between devices that enables automation and machine learning. This is seen as a new phase for technology – even a new, fourth industrial revolution, commonly referred to as Industry 4.0.

Still, the complexity of digital twins creates a high barrier for production and implementation for many prospective innovators. A general misconception is that digital twin production requires that a company simply hire data scientists and provide them an analytics platform. Domain expertise and product lifecycle management tend to be overlooked as a result.

The configuration of assets on a product also impacts design and is subject to changes in scale and capabilities. Divergence from the original, pilot assets can create a cascading effect of incorrect or outdated information between iterations or generations of a product. Asset changes are not always anticipated, certain assets outlast others, and asset replacement in cases of failure can mean drastic changes in design. For products that go through several generations or are sold for decades on the market, ongoing synchronization of digital twins is the only solution; it may need to occur as often as changes are made to the product itself.

It can be challenging to coordinate with manufacturing processes and across iterations or versions as a product makes its way to the consumer. One of the primary use-cases for digital twins in manufacturing has to do with shop floor optimization. Similar focuses on improving operations are found for supply chain use-cases seeking to optimize warehouse design. Generally, study and expertise surrounding these kinds of improvements and optimizations falls under maintenance, repair, and operations (MRO).

 

SIMULATION-BASED DIGITAL TWINS

Computational simulations are a core feature that facilitates the development of digital twins. By combining high-fidelity simulations and fully coupled multiphysics solvers, companies can create models for assets and tweak them using their own data. Simulation insights create robust iteration phases that can cut process and testing costs, ultimately leading to shorter cycle times and greater management of product life cycles. Regardless of the size of a company or the scale of its products, simulations can connect the earliest designs made by research and development teams to final iterations made by manufacturing teams by providing clear, relevant physical and chemical insights.

“Ultimately, an industrial simulation that does not incorporate high-fidelity physics is essentially digital art.”

Given the increasing market focus on visual and virtual utility, impressive graphics can be misleading when it comes to digital twins. Ultimately, an industrial simulation that does not incorporate high-fidelity physics is essentially digital art. Within technical domains, the centermost aspect of a digital twin should be the fidelity with which it can predict not only steady-state processes, but also the edge cases where the physics is expected to be challenging.

Of all the engineering design problems with applications for digital twins, problems experienced within the semiconductor industry are perhaps the most complex. In this industry’s “race to the bottom,” providing high-fidelity models requires the capability to determine the effects of disruptors like chemical impurities – which can threaten the functionality of critical components like wafers – at a margin of one part per trillion (or one nanogram per kilogram). Additional processes like atomic layer deposition are extremely sensitive to local species concentration as well as pressure profiles in the vicinity of the wafer being produced. While these are examples of restrictions based on the difficulty of working at an atomic scale, insight and perspective in the design and manufacturing process for semiconductors represents one of the most rigorous testing grounds for digital twins.
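For reference, the parenthetical unit conversion above is straightforward arithmetic: a mass fraction of one part per trillion is 10⁻¹², and a kilogram contains 10¹² nanograms.

```python
# One part per trillion (mass fraction) expressed as nanograms of impurity
# per kilogram of material: 1e-12 * 1e12 = 1 ng/kg.
parts_per_trillion = 1e-12
nanograms_per_kilogram = parts_per_trillion * 1e12
print(nanograms_per_kilogram)   # -> 1.0
```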

 

Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Rasheed, Adil, Omer San, and Trond Kvamsdal. “Digital twin: Values, challenges and enablers from a modeling perspective.” IEEE Access 8 (2020): 21980-22012.

 

Rajesh, P. K., et al. “Digital twin of an automotive brake pad for predictive maintenance.” Procedia Computer Science 165 (2019): 18-24.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

Plasma Processing with Carbon and Fluorine

 As the semiconductor industry continues to shrink critical feature sizes and improve device performance, challenges in etch processing are increasing as a result of smaller features being processed with new device structures. Higher density and higher-aspect ratio features are introducing new challenges that require additional innovation in multiple areas of wafer processing. As a result of their complexity, these innovations are increasingly reliant on comprehensive physical, chemical, and computational models of plasma etch processes.

Plasma etching is a critical process used in semiconductor manufacturing for removing materials from unit surfaces and remains the only commercially viable technology for anisotropic removal of materials from surfaces. Although plasma was introduced into nanoelectronic fabrication processes in the mid-1980s, and transistor feature sizes have since shrunk by nearly two orders of magnitude, from 1.0 μm to ∼0.01 μm today, that progress was driven mainly by trial and error. Unfortunately, detailed mechanisms for plasma etch processes are still not well understood for a majority of process gases, so the development, improvement, and validation of these mechanisms remains a constant endeavor – one that would open up more opportunities for innovation in this area.

Every Last, Atomic Detail

The growing costs of etching are threatening to slow the rate of improvement for density and process speed, though manufacturing expenses can be mitigated using simulation tools. Each generation of devices requires more layers, more patterning, and more cycles of patterning that continue to increase overall cost and complexity. Even if component size can be reduced further, this presents manufacturers with additional costs in developing even more precise lithography and etching machines. This highlights the balance between atomic layer processing in high volumes and the need for a renewed approach to miniaturization in order to extend Moore’s Law.

Plasma etching takes place as part of the process of wafer fabrication, which in turn is a main process in the manufacturing procedure for semiconductors. For a wafer to be finalized, cycles must be completed potentially hundreds of times with different chemicals. Each cycle increases the number of layers and features that the desired circuit carries. 

Wafers begin as pure, non-conductive, thin discs of crystalline silicon, generally ~6 to 12 inches in diameter, with extreme attention paid to chemical purity before oxidation and coating can occur. Oxidation is one of the oldest steps in semiconductor manufacturing and has been used since the 1950s. Silicon has a great affinity for oxygen, which is readily absorbed and transported across the growing oxide. Layers of insulating and conductive materials are then coated onto the wafer before a photoresist – a mask for etching into the oxide – can be applied.

Photoresist becomes soluble when exposed to ultraviolet light, so that exposed areas can be dissolved using a solvent. The resulting pattern is what gives engineers control at later stages like etching and doping, when devices are formed. Integrated circuit patterns are first mapped onto a glass or quartz plate, with holes and transparencies that allow light to pass through; multiple plates mask each layer of the circuit. Ultraviolet light is then applied to transfer the patterns from the photoresist coating onto the wafer, with the exposed photoresist removed prior to etching. At this point a feed gas stream – a mixture of gases with a carrier (like nitrogen) and an etchant (or other reactive gas) – is introduced to create the chemical reactions that remove material from the wafer.

During the etching process, areas left unprotected by the photoresist layer are chemically removed. Etching generally refers to the removal of materials; however, it requires that photomask layers and underlying materials remain unaffected in the process. In some cases, as with anisotropic etches, materials are removed in specific directions to produce geometric features like sharp edges and flat surfaces, which can also increase etch rates and lower cycle times. Metal deposition and etching place metal links between transistors and are among the final steps before a wafer is complete.

Both physical and chemical attributes are present in the etching process. The active species (atoms, ions, and radicals) are generated by electron-impact dissociation of the feed gases. Feed gas mixtures for plasma etching are usually complex because of the conflicting requirements on etch rate, selectivity to mask and underlayer, and anisotropy. The plasma itself also dissociates the feed gas into reactive species that can react with each other in the gas phase and on surfaces, leading to a further cascade of species generation in the plasma.

The most common etchant atoms are fluorine (F), chlorine (Cl), bromine (Br), and oxygen (O), which are usually produced using mixtures of chemically reactive gases such as CF₄, O₂, Cl₂, CCl₄, HBr, and CHCl₃. Inductively coupled and capacitively coupled plasma reactors (ICP and CCP, respectively) have found the most widespread use in semiconductor manufacturing. ICP sources allow the generation of relatively dense plasmas (∼10¹⁶–10¹⁷ m⁻³) at relatively low gas pressures (1–10 mTorr). With independent wafer biasing, they also allow independent control of the ion flux and ion energy at the wafer surface. This process can be engineered to be chemically selective in order to remove different materials at different rates.
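The ion flux mentioned above can be estimated from a standard textbook relation, the Bohm criterion, which sets the ion velocity at the sheath edge. The sketch below is a back-of-the-envelope calculation with illustrative numbers chosen to fall in the density and temperature range quoted in this article; it is not taken from any particular reactor simulation.

```python
import numpy as np

# Back-of-the-envelope ion flux to the wafer from the Bohm criterion,
# Gamma = n_s * u_B with u_B = sqrt(e*Te / m_i). Inputs are illustrative.
E_CHARGE = 1.602e-19          # elementary charge, C
AMU = 1.66e-27                # atomic mass unit, kg

def bohm_flux(n_sheath_edge, Te_eV, ion_mass_amu):
    """Ion flux (m^-2 s^-1) to a surface bounding a quasineutral plasma."""
    u_bohm = np.sqrt(E_CHARGE * Te_eV / (ion_mass_amu * AMU))
    return n_sheath_edge * u_bohm

# Example: CF3+ ions (69 amu), Te = 4 eV, sheath-edge density 5e16 m^-3.
print(f"{bohm_flux(5.0e16, 4.0, 69.0):.2e} ions m^-2 s^-1")
```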

Molecular Design in Mind

One of the most important applications of plasma etching is the selective, anisotropic removal of patterned silicon or polysilicon films. Halogen-atom (F, Cl, Br) bearing precursor feedstock gases are almost always used for this purpose. Common feedstock gases for F atoms are CₓFᵧ, SF₆, and NF₃. Understanding the physical and chemical processes in reactive plasmas requires reliable elementary finite-rate chemical reaction mechanisms. Tetrafluoromethane (CF₄) is one of the most frequently used gases for the generation of F atoms. The admixture of a small percentage of oxygen to a CF₄ plasma dramatically increases the etch rate of silicon surfaces and can also be used to control the lateral etching of silicon.

Distribution of electron temperatures in an ICP reactor modeled using VizGlow.

Tetrafluoromethane (CF₄) is an important feed gas for plasma etching of silicon. It is relatively easy to handle, non-corrosive, and has low toxicity. CF₄ has no stable electronically excited states, which means that the electron energy is spent on the generation of chemically active ions and radicals without electronic excitation losses. While tetrafluoromethane plasmas have been studied since the early development of plasma etching processes, the influence of various gas-phase and surface reactions on the densities of active species is still poorly understood.

VizGlow is a full-featured, high-fidelity simulation tool for modeling the chemically reactive plasmas that are present in half of the steps in the semiconductor fabrication process described above. The characteristics of gas species and the kinetic modeling of their reactions remain an area with unexplored potential for further innovation. Radicals created by plasmas are extremely reactive because of their unpaired electrons, a property semiconductor engineers exploit to speed up processes and shorten cycle times. The same is true for deposition processes, where radicals prevent damage to the chip as it cools from the >1000 °C temperatures produced within etching equipment. Throughout these processes, defects, impurities, and nonuniformities can be detected and diagnosed with help from simulated models. Simulations using VizGlow can help guide design iterations to avoid operating conditions that could compromise wafers even after months of processing.

Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Levko, Dmitry, et al. “Computational study of plasma dynamics and reactive chemistry in a low-pressure inductively coupled CF4/O2 plasma.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 39.4 (2021): 042202.

Levko, Dmitry, Chandrasekhar Shukla, and Laxminarayan L. Raja. “Modeling the effect of stochastic heating and surface chemistry in a pure CF4 inductively coupled plasma.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 39.6 (2021): 062204.

Levko, Dmitry, et al. “Plasma kinetics of c-C4F8 inductively coupled plasma revisited.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 40.2 (2022): 022203.

Lee, Chris GN, Keren J. Kanarik, and Richard A. Gottscho. “The grand challenges of plasma etching: a manufacturing perspective.” Journal of Physics D: Applied Physics 47.27 (2014): 273001.

Kanarik, Keren J. “Inside the mysterious world of plasma: A process engineer’s perspective.” Journal of Vacuum Science & Technology A: Vacuum, Surfaces, and Films 38.3 (2020): 031004.

 

Marchack, N., et al. “Plasma processing for advanced microelectronics beyond CMOS.” Journal of Applied Physics 130.8 (2021): 080901.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com. This post’s feature image is by Laura Ockel & Unsplash.