Mirroring the World with Digital Twins

Twins are a shared theme in literature and mythology across cultures, used to explore concepts like duality, polarity, and unity. Yet alongside these themes sit darker ones: loss, fratricide, and self-realization through remorse. For every Castor and Pollux there is a Cain and Abel, or a Romulus and Remus. Twins in myth evoke both the triumph and the tragedy they represent, and the efforts of the current decade may tell us which of the two ultimately characterizes digital twins and their implementation.

Since Michael Grieves introduced it in the early 2000s, the term “digital twin” has become an ambiguous label for the future of simulation and modeling applications. While Grieves’ original focus was improving product life cycles, high-fidelity virtual representations of physical objects seemed like a certain future for computational modeling, given growing technological capabilities and their increasing role in product design and iteration.

What was once Grieves’ insight into the future of technological applications has become a catch-all for any number of virtual models of physical entities, together with the flow of data between them that keeps the two in parity. The phrase’s ambiguity stems from its widespread usage across industries and from the still-evolving methodologies for reaching the “mirrored,” or “twinned,” ideal.

As with any other technology, simulations and computational models have limitations that tend to be overshadowed by their perceived benefits and desired insights. Beyond the abstract, requirements and standards for what constitutes a digital twin have yet to appear. What’s more, the concept of a digital twin is arguably not new at all, but simply an aggregation of techniques and research already in existence.


SPECULUM SPECULORUM

An issue with the popularity of terms like “digital twin” is that they risk becoming misnomers for lack of a common development methodology, much like the internet of things (IoT) platforms they rely on, which require no internet connection at all. Digital twins face difficulties not only in procuring enough sensor data to mirror physical entities, but in procuring and applying the correct data to become accurate representations. For example, a digital twin of a car’s braking system could predict when maintenance will be needed by using predictive models of brake-pad wear. Yet even this specific system depends on numerous external factors like environment, temperature, and lubrication, as well as on an IoT platform of sensors that communicate and collect data from connected assets, or parts. The absence of any one of these parameters can yield incomplete or erroneous data and, in turn, faults in the virtual entity. Identifying missing parameters and diagnosing inconsistencies between physical and virtual entities can make digital twins prohibitive in both cost and labor.
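To make the brake-pad example concrete, here is a minimal sketch of the kind of predictive model such a twin might run. Everything in it is an illustrative assumption, including the sensor fields, the coefficients, and the linear wear law; it is not a validated automotive model.

```python
# Minimal sketch of a predictive-maintenance model for brake-pad wear.
# All parameter names, coefficients, and the linear wear law are
# illustrative assumptions, not a validated automotive model.
from dataclasses import dataclass

@dataclass
class BrakeEvent:
    speed_kph: float    # vehicle speed at braking (sensor reading)
    pad_temp_c: float   # pad temperature (sensor reading)
    duration_s: float   # braking duration

def wear_per_event(e: BrakeEvent, base_rate_mm_s: float = 1e-5) -> float:
    """Estimate pad wear for one braking event (assumed linear model).

    Wear grows with speed and accelerates at high temperature; the
    1.5x factor above 300 C is a placeholder, not measured data.
    """
    temp_factor = 1.5 if e.pad_temp_c > 300 else 1.0
    return base_rate_mm_s * e.duration_s * (e.speed_kph / 100.0) * temp_factor

def thickness_margin(pad_thickness_mm: float, events: list[BrakeEvent],
                     min_thickness_mm: float = 3.0) -> float:
    """Project the remaining thickness margin after the observed events."""
    worn = sum(wear_per_event(e) for e in events)
    return (pad_thickness_mm - worn) - min_thickness_mm

# Feed the twin a batch of telemetry and check the maintenance margin.
events = [BrakeEvent(80, 250, 3.0), BrakeEvent(120, 320, 5.0)]
print(f"thickness margin: {thickness_margin(12.0, events):.6f} mm")
```

Even this toy version shows where the fragility comes from: drop the temperature sensor, and the wear estimate silently degrades.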

The figure below shows hypothetical examples of digital twin implementations for an atomic layer deposition reactor, a complex machine used to deposit thin films onto materials.


At its core, a digital twin is a real-time, virtual representation of a physical entity, enabled by sensors and data. Twins can take on specific roles depending on the type of problem they solve or the advantages they offer. Adopting the model introduced by Oracle, there are three primary implementations:

 

Virtual Twins

A virtual representation of a physical entity or asset. A virtual twin holds data provided from its physical counterpart, best described as parameters, and requires a connection through which it retrieves information from the physical environment. The type and number of parameters sent across this connection, as well as their accuracy, are the primary attributes in grading the “fidelity” of the virtual entity.
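As a minimal sketch of this idea, the snippet below mirrors a set of named parameters from a physical counterpart over an abstract connection. The transport layer and parameter names are placeholders, since real deployments (OPC UA, MQTT, vendor platforms) vary widely.

```python
# Minimal sketch of a virtual twin that mirrors named parameters from a
# physical counterpart. `read_sensor` stands in for whatever transport a
# real deployment would use; the parameter names are assumptions.
import time
from typing import Callable

class VirtualTwin:
    def __init__(self, read_sensor: Callable[[str], float],
                 parameters: list[str]):
        self.read_sensor = read_sensor  # connection to the physical environment
        self.parameters = parameters    # which physical quantities are mirrored
        self.state: dict[str, float] = {}
        self.last_sync: float = 0.0

    def sync(self) -> None:
        """Pull every tracked parameter across the connection."""
        self.state = {p: self.read_sensor(p) for p in self.parameters}
        self.last_sync = time.time()

# "Fidelity" here reduces to how many parameters are mirrored and how
# fresh they are -- a crude stand-in for the grading described above.
twin = VirtualTwin(read_sensor=lambda name: 0.0,
                   parameters=["temp_c", "pressure_pa"])
twin.sync()
print(twin.state, twin.last_sync)
```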

 

Predictive Twins

As the name suggests, this implementation focuses on predictive models: not a static representation of a physical entity, but one built from data gathered from historic states. These twins detect problems that could occur in a future state, either proactively protecting against them or giving designers the opportunity to diagnose and prevent them. Predictive twins can be much simpler than other implementations, focusing on specific parameters like machine data rather than constantly receiving sensor information and recreating a full virtual environment.

 

Twin Projections

This implementation is also used to create predictive models, but relies heavily on IoT data exchange between individually addressable devices over a common network, rather than on sensors in a physical environment. Applications that generate insights from these IoT platforms generally have access to aggregate data, which is used to predict machine states and alleviate workflow issues.
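A toy illustration of a projection, under an assumed message format and an assumed alert rule, might aggregate readings across devices and flag machines drifting toward a fault:

```python
# Sketch of the "projection" idea: insights come from aggregated IoT
# messages rather than a full virtual replica. The message format and
# the vibration threshold are illustrative assumptions.
from collections import defaultdict
from statistics import mean

def aggregate(messages: list[dict]) -> dict[str, float]:
    """Average each device's reported metric across a batch of messages."""
    by_device = defaultdict(list)
    for m in messages:  # m = {"device": id, "vibration": value}
        by_device[m["device"]].append(m["vibration"])
    return {dev: mean(vals) for dev, vals in by_device.items()}

def flag_anomalies(aggregates: dict[str, float],
                   limit: float = 4.0) -> list[str]:
    """Flag devices whose average vibration exceeds a workflow threshold."""
    return [dev for dev, v in aggregates.items() if v > limit]

batch = [{"device": "pump-1", "vibration": 3.2},
         {"device": "pump-1", "vibration": 5.1},
         {"device": "press-2", "vibration": 1.0}]
print(flag_anomalies(aggregate(batch)))  # -> ['pump-1']
```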

Each implementation faces its own issues. Maintaining connectivity to sensors for data transfer from physical entities, handling the volume of network traffic between devices, and identifying key parameters are make-or-break in implementing successful twins. The still-ununified methods of collecting data further exacerbate the situation, with most vehicles for standardization lying in shared models and information.

The issue that results from relying on such collaborations is data ownership, an issue already marred by moral and legal controversies. Nonetheless, the promised improvements in behavior, conformity, design, manufacturability, and structure have already attracted major attention from researchers.


BEAUTY IN COMPLEXITY

Given the broad applications and the ambitious technology behind the concept, it is interesting to consider what cannot be digitally twinned, especially given that a digital twin of Earth is already in production. The answer ultimately depends on a digital twin’s use case and on the degree to which it can produce the desired results.

Using this as a criterion doesn’t narrow the already broad definition of what constitutes a digital twin; one could argue that established technologies like Google Maps and Microsoft Flight Simulator are digital twins. While this may detract from the term’s novelty, “digital twin” also carries an undertone of possibility through connectivity. Excitement surrounding digital twins is heavily tied to the anticipation of a new level of interconnectedness between devices, one that enables automation and machine learning. This is seen as a new phase for technology, even a fourth industrial revolution, commonly referred to as Industry 4.0.

Still, the complexity of digital twins creates a high barrier to production and implementation for many prospective innovators. A common misconception is that producing a digital twin simply requires hiring data scientists and providing them an analytics platform; domain expertise and product lifecycle management tend to be overlooked as a result.

The configuration of assets on a product also impacts design and is subject to changes in scale and capabilities. Divergence from the original, pilot assets can create a cascading effect of incorrect or outdated information between iterations or generations of a product. Asset changes are not always anticipated, certain assets outlast others, and asset replacement after failures can mean drastic changes in design. For products that go through several generations or are sold for decades, synchronizing the digital twin, potentially as often as the product itself changes, is the only solution.

It can be challenging to coordinate with manufacturing processes and across iterations or versions as a product makes its way to the consumer. One of the primary use cases for digital twins in manufacturing is shop-floor optimization; a similar operational focus appears in supply-chain use cases seeking to optimize warehouse design. Study and expertise surrounding these kinds of improvements generally fall under maintenance, repair, and operations (MRO).


SIMULATION-BASED DIGITAL TWINS

Computational simulations are a core feature facilitating the development of digital twins. By combining high-fidelity simulations and fully coupled multiphysics solvers, companies can create models for assets and tweak them using their own data. Simulation insights create robust iteration phases that can cut process and testing costs, ultimately leading to shorter cycle times and tighter management of product life cycles. Regardless of the size of a company or the scale of its products, simulations can connect the earliest designs made by research and development teams to final iterations made by manufacturing teams by providing clear, relevant physical and chemical insights.

“Ultimately, an industrial simulation that does not incorporate high-fidelity physics is essentially digital art.”

Given the market’s increasing focus on visual and virtual utility, impressive graphics can be misleading when it comes to digital twins. Ultimately, an industrial simulation that does not incorporate high-fidelity physics is essentially digital art. Within technical domains, the centermost aspect of a digital twin should be the fidelity with which it predicts not only steady-state processes, but also the edge cases where the physics is most challenging.

Of all the engineering design problems with applications for digital twins, those in the semiconductor industry are perhaps the most complex. In this industry’s “race to the bottom,” high-fidelity models must capture the effects of disruptors like chemical impurities, which can threaten the functionality of critical components like wafers, at margins of one part per trillion (one nanogram per kilogram). Processes like atomic layer deposition are also extremely sensitive to local species concentrations and to pressure profiles in the vicinity of the wafer being produced. While these restrictions stem from the difficulty of working at the atomic scale, design and manufacturing for semiconductors represents one of the most rigorous testing grounds for digital twins.


Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Rasheed, Adil, Omer San, and Trond Kvamsdal. “Digital twin: Values, challenges and enablers from a modeling perspective.” IEEE Access 8 (2020): 21980-22012.

 

Rajesh, P. K., et al. “Digital twin of an automotive brake pad for predictive maintenance.” Procedia Computer Science 165 (2019): 18-24.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

Fulfilling the Need for Plasma Experts

The Fourth State; The First Priority

With renewed investment in the semiconductor industry by both the American government and private investors, opportunities for the next generation of American engineers are set to soar as the United States attempts to reclaim a greater share of global production. Compounding the increasing demand for semiconductor engineers and technical specialists is a growing shortage of qualified workers. Although semiconductors have become key to supporting global critical infrastructure, injecting billions into the industry alone will not solve the problem. These factors are culminating in a pivotal moment for the trajectory of our shared technological future.

Alongside the increase in funding and opportunity is another technical domain that the United States can target with its investments and incentives: plasma. Plasma cleaning, etching, and deposition, which play a heavy role in the early stages of removing material from (or adding it to) surfaces, have created large intersections between applied engineering and chemistry. Although plasma techniques are not new to the manufacturing process, expertise in these areas has been outsourced to other stops in the global semiconductor supply chain. Plasma is a key component enabling control and manipulation of physics at the atomic level, at a time when transistors have already pushed into single-nanometer dimensions. Given the dearth of plasma expertise and the ever-increasing expectations placed on the semiconductor industry, plasma is set to play an even greater role in the semiconductor manufacturing process.


Clairvoyance in the Semiconductor Industry

In an industry requiring such high-level specialization, partnerships with academia and programs that introduce the field to young talent are an important step in fostering interest in these positions. A 2017 survey of US-based semiconductor manufacturers revealed that despite 75% of companies planning increased spending on learning and development (L&D), there was a critical lack of resources and infrastructure to offer training. Semiconductor manufacturers may offer an unparalleled level of hands-on skill acquisition, but this also requires technical experts to act as mentors in the workplace.

For EsgeeTech, a renewed focus on domestic semiconductor production presents the opportunity to serve the semiconductor industry by enhancing its plasma expertise. Douglas Breden and Anand Karpatne, our in-house plasma experts who have recently trained employees from Applied Materials and Tokyo Electron, point out that a semiconductor wafer carries thousands of dollars of value in an area smaller than a penny. These wafers are processed by complex machines employing inductively coupled plasma (ICP) or capacitively coupled plasma (CCP) systems. With steep demands on the reliability of such manufacturing systems, root-cause analysis of any potential issue requires an in-depth understanding of plasma physics. At Esgee, we equip our customers with customized training to enhance their understanding of plasma systems, and based on recent feedback, we plan to expand our training to non-customers later this year. Breden and Karpatne believe the fundamentals of plasma physics have become crucial for anyone seeking to understand modern semiconductor manufacturing systems.

However, there remains a lack of industry-specific training in plasma theory and application. There is still little discussion of the subject online, even though the internet age itself was enabled by plasma-manufactured systems. This is a core issue that EsgeeTech aims to resolve: the esoteric tag attached to plasma manufacturing is unjustified. Although specialization remains an enabling feature of the global supply chain, the concentration of critical processes has also created the risk of systemic bottlenecks in the case of geopolitical conflict, natural disasters, or other disruptions to specialist regions’ contributions to the chain. Competition in plasma-specialized processes would reduce those vulnerabilities while alleviating demand.

Such a solution is in the best interest of semiconductor producers as well as the average electronics consumer, whose cell phones, computers, and similar gadgets rely on plasma for their production. EsgeeTech stands out as a company with an established background in the plasma techniques used by the semiconductor industry. Decades of experience in consulting projects with semiconductor manufacturers have shaped our understanding of the problems faced within the industry, as well as their potential solutions. Our flagship product, VizGlow, is designed specifically with the semiconductor industry in mind, with the goal of providing an end-to-end software package that enables innovation through robust multiphysics simulations.

VizGlow™ has always been developed with a focus on providing and improving applications for semiconductor engineers. EsgeeTech’s larger clients have engaged with us in verification and validation of experimental data through VizGlow™, and many of these efforts are available for review in the open literature.


Envisioning Tomorrow

“The number of transistors on a microchip doubles every two years, while the cost of computers is halved” is an observation of technological trends known as “Moore’s Law.” Since Gordon Moore first introduced it in 1965, it has become a target for the speed of scaling and miniaturization within the semiconductor industry. As a result, designs made today for tomorrow’s devices carry the expectation, but not the guarantee, of maintaining this rate. This creates immense pressure within the industry to revise and refine techniques, pressure further compounded by the shortage of workers. At this stage of quantum processing and engineering, attracting qualified workers requires promoting the fields that semiconductor engineering intersects, as well as improving resources to lower the industry’s entry barrier.

EsgeeTech likewise relies on attracting a qualified workforce interested in providing expertise to the semiconductor industry. Our collective backgrounds in fluid flow, electromagnetics, kinetic modeling, computational sciences, and computer science are an example of how diverse teams enable a company to excel amid an ever-changing market. They also underscore the multidisciplinary nature of plasma, and further highlight the difficulties semiconductor manufacturers face in filling vacancies.


Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com. This post’s feature image courtesy of Hal Gatewood & Unsplash.

Clearing the Dust with VizGrain

In the fight against airborne particulates, semiconductor manufacturers face the unrelenting threat of contamination within their labs and facilities. Airflow, microfiltration, air ionization, air pressure, humidity controls, polymer toolsets, anterooms and air showers, and cleanroom suits are just a few of the considerations that manufacturers must make, all in the name of isolating lab processes from the outside world. Despite extensive procedures and costly investments, it is nearly impossible to produce an environment completely devoid of pollutants, and the most common of all pollutants is the fine, microscopic particulate matter known simply as “dust.”

Unlike the dusty plasmas found in the Earth’s mesosphere and among celestial bodies that fascinate astrophysicists, dusty plasmas in labs here on Earth are a constant source of frustration within the semiconductor industry. Their presence within reactors and manufacturing equipment continues to threaten contamination of wafers and other critical components.

Perhaps someday in the future, the presence of dusty plasmas in semiconductor manufacturing facilities will cease to be, either by way of technological innovation or perfection of cleanroom procedure. But until that time, simulations must account for every piece of relevant physics in order to create a realistic model of these environments.

Making the Dust Fly for Semiconductor Manufacturers

Dust particles are common contaminants of the plasma processing discharge chambers used in the semiconductor industry for etching and deposition. These particles can range from several nanometers to several hundred micrometers in size and accrue a relatively large negative charge in the plasma, which traps them electrostatically. Large particles usually accumulate near the sheath edge, while small particles accumulate in the center of the discharge chamber, where the electrostatic potential is usually most positive.

Trajectories of particles (a) with Wafer Bias and (b) without Wafer Bias. Size of particles ranges from 0.15 microns to 0.5 microns. Image Source: Kobayashi et al.

For semiconductor manufacturers, the formation of particles within a plasma and the effect of dust particles on the processing surface are determining factors for overall process quality and yield. Developing macro-particle kinetic models that account for all the associated physics (macro-particle growth, charge-up, and transport within a plasma) is a necessity in modern semiconductor processing reactor design.

As a result, studies of dusty plasmas focus primarily on particle transport and plasma distribution. This is the case in plasma etching and deposition systems, where particle behavior can be characterized by calculating the measured gas-temperature distribution and the resulting thermophoretic force. The thermophoretic force can then be controlled through plasma distribution controls and by changing gas-temperature distributions across the wafer.
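The sketch below illustrates the underlying relationship in its simplest form, F_th = -K grad(T): a particle in a gas-temperature gradient is pushed toward the colder region. The coefficient K is a placeholder; real thermophoresis models depend on particle size, gas properties, and flow regime.

```python
# Simplified sketch of thermophoresis: a dust particle in a gas-temperature
# gradient feels a force toward the colder region, F_th = -K * grad(T).
# K = 1e-14 N*m/K is a made-up coefficient, not a physical value.
import numpy as np

def thermophoretic_force(T: np.ndarray, dx: float, K: float = 1e-14):
    """Return -K * grad(T) on a 1D temperature profile (central differences)."""
    gradT = np.gradient(T, dx)  # K/m
    return -K * gradT           # N, points from hot to cold

# Hotter near the wafer (x = 0), cooler above it: the force pushes
# particles away from the wafer surface.
x = np.linspace(0.0, 0.05, 101)   # 5 cm gap
T = 400.0 - 2000.0 * x            # linear 400 K -> 300 K profile
F = thermophoretic_force(T, x[1] - x[0])
print(F[0])  # positive: directed toward the cooler gas
```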

Until recently, these breakthroughs in particle control and plasma distribution relied on expensive and elaborate experiments. Now, particle-based simulations through software like VizGrain can predict these behaviors while providing the core features necessary for a computational model:

  1. A multi-subdomain capability, where multiple solids and gas regions can be described simultaneously.
  2. Unstructured meshing for representing complex topologies with fine geometric features.
  3. Modeling of static electric and magnetic fields as well as electromagnetic waves through coupling with an electromagnetics solver.
  4. Treating subsets of the overall gas composition as a continuum through coupling with a classical fluid flow solver and a plasma solver.

VizGrain: A Versatile Computational Tool for Particle Simulations

A unique aspect of VizGrain is that it allows computational modeling of particle dynamics in a variety of systems, including:

 

  • rarefied gas dynamics
  • gas discharge plasmas
  • macroscopic particle dynamics (e.g., dust particles, droplets, etc.)

VizGrain can treat atomic-size particles as well as particles of finite macroscopic size. The former are used to model rarefied gas dynamics and conventional non-equilibrium plasmas, while finite-size macro-particles are used to model dusty plasmas, aerosols, and droplets, to name a few. In the latter case, electrical charge-up of particles in a plasma environment is also considered. Additionally, these models feature a comprehensive variety of drag forces that can act on both atomic and macro-particles.

VizGrain solves governing equations that describe the transport (motion and collisions) as well as the generation and destruction of particles in a specified domain. A number of different particle types, both “atomic” and “macro-scale,” can be solved.

Dusty plasma dynamics in a capacitively coupled plasma (CCP) reactor generated using VizGlow™ (fluid) and VizGrain (dust particles).

Electrically neutral species and radicals, as well as electrically charged species like electrons and positive and negative ions, are atomic particle types that can be considered simultaneously. Macro-scale particle types include molecular clusters and larger micron-to-millimeter scale dust particles. All such particles have mass, charge, and size attributes. The mass of an atomic particle is immutable, while that of a macro-scale particle can change according to governing laws. Similarly, atomic particle charge is fixed, while macro-scale particle charge can change through charge-up processes.
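As a rough illustration of the kind of governing equation such a solver integrates, the sketch below pushes a single charged macro-particle through an electrostatic field with linear drag. It is a toy explicit integrator with made-up parameters, not VizGrain’s numerics.

```python
# Toy integrator for a charged macro-particle:
#   m * dv/dt = q * E(x) - drag_coeff * v
# All parameter values are illustrative, not VizGrain's implementation.
def push(x, v, q, m, E_at, drag_coeff, dt, steps):
    """Explicit time-stepping of position and velocity in 1D."""
    for _ in range(steps):
        a = (q * E_at(x) - drag_coeff * v) / m
        v = v + a * dt
        x = x + v * dt
    return x, v

# A negatively charged dust grain in a field pointing outward from the
# center: the electrostatic force pulls it back (electrostatic trapping).
E_at = lambda x: 1e4 * x                 # V/m, linear restoring profile
x, v = push(x=1e-3, v=0.0, q=-1.6e-16,   # ~1000 elementary charges
            m=1e-15, E_at=E_at, drag_coeff=1e-12, dt=1e-7, steps=1000)
print(x, v)  # the grain drifts back toward the potential maximum
```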

 

In VizGrain, all the particles in a swarm are classified according to “particle type.” All particles of a given type carry properties such as mass, charge, and size (diameter or cross section). Extensive use of object-oriented programming principles keeps the implementation modular, so the list of properties can be extended whenever necessary.
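A minimal sketch of such a classification, with assumed field names rather than VizGrain’s actual schema, might look like this:

```python
# Sketch of a "particle type" record: shared properties per type, with
# the property list easy to extend. Field names are assumptions, not
# VizGrain's actual schema.
from dataclasses import dataclass, field

@dataclass
class ParticleType:
    name: str
    mass_kg: float
    charge_c: float
    diameter_m: float
    mutable_mass: bool = False    # macro-particles may grow or shrink
    mutable_charge: bool = False  # macro-particles charge up in a plasma
    extra: dict = field(default_factory=dict)  # room for new properties

electron = ParticleType("e-", 9.109e-31, -1.602e-19, 0.0)
dust = ParticleType("dust", 1e-15, 0.0, 5e-7,
                    mutable_mass=True, mutable_charge=True)
```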

 

VizGrain also offers flexibility in representing practical applications with complex geometries. Meshes can be built from triangular and quadrilateral cells (in 2D), as well as tetrahedra, hexahedra, prisms, pyramids, or even a mixture of all the aforementioned cell types.

 

Additionally, meshes can be prepared in a variety of commonly used formats and imported into VizGrain. The code also outputs, at selected screen-output intervals, the maximum number of particles in any cell over the whole mesh, with warnings for severely skewed cells that could portend poor-quality solutions (especially for the electrostatic potential in PIC simulations). Note that the accuracy of pure particle simulation results is usually insensitive to mesh quality, which has been confirmed in VizGrain simulations.
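The snippet below sketches this kind of per-cell diagnostic: counting particles in each cell, reporting the maximum, and warning on poor cells. The quality metric and its 0.1 threshold are illustrative assumptions, not VizGrain’s criteria.

```python
# Sketch of a per-cell particle count and skewness warning.
# The quality metric in (0, 1] and the 0.1 threshold are assumptions.
import numpy as np

def cell_report(particle_cells: np.ndarray, cell_quality: np.ndarray,
                skew_limit: float = 0.1) -> None:
    counts = np.bincount(particle_cells, minlength=cell_quality.size)
    print(f"max particles in a cell: {counts.max()} (cell {counts.argmax()})")
    for c in np.flatnonzero(cell_quality < skew_limit):
        print(f"warning: cell {c} is severely skewed "
              f"(quality {cell_quality[c]:.2f})")

cells = np.array([0, 0, 1, 2, 2, 2])   # cell index of each particle
quality = np.array([0.9, 0.05, 0.8])   # per-cell quality metric
cell_report(cells, quality)
```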



Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Levko, Dmitry, et al. “VizGrain: a new computational tool for particle simulations of reactive plasma discharges and rarefied flow physics.” Plasma Sources Science and Technology 30.5 (2021): 055012.

Kobayashi, Hiroyuki, et al. “Investigation of particle reduction and its transport mechanism in UHF-ECR dielectric etching system.” Thin Solid Films 516.11 (2008): 3469-3473.

Merlino, Robert. “Dusty plasmas: from Saturn’s rings to semiconductor processing devices.” Advances in Physics: X 6.1 (2021): 1873859.

Merlino, Robert L., and John A. Goree. “Dusty plasmas in the laboratory, industry, and space.” Physics Today 57.7 (2004): 32-39.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

Modern Solutions for Global Semiconductor Manufacturing

Recently, consumers have faced rising prices for semiconductor-powered devices, with shortages affecting the availability of products that serve their daily needs. With increased demand for semiconductors in key areas like the automotive industry, healthcare, and AI-enabled products, manufacturers are vying to remain competitive while making next-generation breakthroughs to meet current demand.

So, the question is: how can industrial researchers continue to innovate in order to boost semiconductor production? And how can simulations relieve the pressure on the semiconductor industry to meet ever-growing global demands?

A joint report authored by Boston Consulting Group (BCG) and the Semiconductor Industry Association (SIA) aims to combat semiconductor shortages by profiling risks in the current international supply chain and highlighting semiconductors as a central component of shared economic stability across the globe. Central to the report’s findings are statistics characterizing the issues the world faces in securing a future where semiconductor demand is met as it grows over the next decade.
 

The Global Semiconductor Supply Chain at a Glance

The current cooperative structure of the global semiconductor supply chain is as unique as it is complex, with a web of destinations across the globe from the earliest stages of research and design to the final point of sale. Since the 1970s, specialization within these national and regional stages has contributed to the chain’s ability to produce at the speed of demand, while also innovating and improving the capabilities of semiconductors faster than any one country or region could.
 
Figure 1 below shows the current global semiconductor market share by region as of 2020. South Korea and the United States account for two-thirds of total market production and sales.

In addition to utilizing the specializations offered by each of the six major regions (Europe, Japan, Mainland China, South Korea, Taiwan, and the United States) that contribute to the global supply chain, this roundabout system also makes use of favorable trade conditions among the participating countries to keep production costs and consumer prices affordable. Figure 2 below shows semiconductor usage by industry.

Major disruptions to the current supply chain could lead to a sharp rise in the cost of devices for producers and consumers alike. In a hypothetical situation where the global chain is replaced with self-sufficient regions, the report forecasts up to $900 – 1,225B of upfront investment required to maintain current output and meet rising demand, with an overall cost increase of 35% – 65% for consumers if regional and comparative advantages are ignored.
 
National policies within key regions, most notably Mainland China – which has massive industrial and manufacturing capabilities – have already made self-sufficiency a high priority for future development in semiconductors.
 
Similar policies in other nations could leave local markets open to unforeseen factors, including greater competition for materials and additional costs in securing and transporting them. Situations like natural disasters and geopolitical conflict could destabilize systems that seek to decouple from the international chain, leading to regional shortages of semiconductors and additional production issues for critical communications and security sectors.
 

Researching and Developing Solutions for the Market

In addition to the current issues the international supply chain faces, SIA’s report highlights the importance of research and development, the primary way producers maintain state-of-the-art techniques and provide security in their devices.
 
Although the speed of innovation in major market devices like consumer electronics is visible from year to year, techniques developed at the pre-competition research stage can take decades to be utilized at mass scale within the global chain. As a result, original equipment manufacturers (OEMs) and integrated device manufacturers (IDMs) face upfront costs in both R&D and capital expenditure, with years passing before they see a return on investment in these areas.
 
Despite the delayed turnaround for companies investing in pre-competitive and basic research, cooperation at these early stages enables chips to become smaller while increasing performance. Recent innovations like 5G, the internet of things (IoT), and autonomous vehicles all began their journeys to widespread use at this stage. Figure 3 below illustrates R&D spending among key regions as a percentage of sales.

SIA’s report also cites the need for utilization of emergent technologies in alleviating risks and constraints in the global chain, with modern inventions like augmented and virtual reality (AR/VR) playing a crucial role in enabling operations to continue remotely throughout the pandemic.
 
Simulation provides the same bridging of digital and physical worlds, allowing manufacturers to cut material costs and the risk of exposure to hazardous materials, all without sacrificing the insights that physical experiments and trials offer.
 

Unique Solutions Require Detailed, High-fidelity Simulations

The use of simulation software and digital tools to minimize the risks that global producers face is both economical and modern, and its viability as an industry-wide solution will only grow with time. Simulations offer additional points of innovation through applications to commonly used semiconductor equipment, such as plasma reactors, where details like simulated angular distribution functions inform process parameters like excitation frequency and excitation voltage.

Industry leaders like Dr. Peter Ventzek and Dr. Alok Ranjan of Tokyo Electron Ltd. – a global supplier of equipment used to fabricate integrated circuits – have already taken advantage of high-fidelity plasma simulation and processing to develop new techniques with a wide array of applications for the semiconductor industry, using the insights offered by numerical simulations in VizGlow™. Here are a few examples of patented methods and techniques, informed by simulation, that are contributing to the semiconductors of today and tomorrow:

  • Mode-switching plasma systems and methods that allow manufacturers to reduce minimum required features and the cost of ICs while increasing the packing density of components. Manufacturers working at the atomic scale can continue scaling semiconductor devices while accounting for constraints like equipment configurability, equipment cost, and wafer throughput.

  • Techniques for the formation, patterning, and removal of materials to achieve physical and electrical specifications for current and next-generation semiconductors. Plasma etching and deposition are prone to cross-talk between source power (SP) and bias power (BP) effects, which reduces control and precision; decoupling these effects enhances control while decreasing complexity.

  • Pulsed electron beams used to create new plasma processing methods that reduce feature size while maintaining structural integrity. As device structures continue to densify and develop vertically, methods producing atomic-level precision in plasma processes will be useful for profile control, particularly for controlling deposition and etching at the timescales associated with the growth of a single monolayer of film.

Plasma-assisted etching and deposition processes rely on accurate determination of the distribution of ion energy and angle close to the substrate surface. Precise control over these parameters can be used to manipulate the bombardment of the process surface. From a process engineer’s perspective, however, incremental changes in geometric design, voltage, power, feed-gas composition, and flow rates must be correlated with the ion energy and angular distribution function (IEADF).
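As an illustration of what an IEADF is computationally, the sketch below bins simulated ion impacts by energy and incidence angle into a normalized 2D histogram. The sample data is synthetic and stands in for solver output; the bin counts are arbitrary choices.

```python
# Sketch of constructing an IEADF from simulated ion impacts at the wafer:
# a normalized 2D histogram over kinetic energy and incidence angle.
# The sample arrays below are synthetic placeholders for solver output.
import numpy as np

def ieadf(energies_ev: np.ndarray, angles_deg: np.ndarray,
          e_bins: int = 100, a_bins: int = 45):
    """2D histogram f(E, theta), normalized to unit integral."""
    H, e_edges, a_edges = np.histogram2d(
        energies_ev, angles_deg, bins=[e_bins, a_bins],
        range=[[0.0, energies_ev.max()], [0.0, 90.0]], density=True)
    return H, e_edges, a_edges

rng = np.random.default_rng(0)
E = rng.normal(100.0, 15.0, 100_000).clip(min=0.0)  # eV, placeholder
theta = np.abs(rng.normal(0.0, 3.0, 100_000))       # deg, near-normal ions
H, _, _ = ieadf(E, theta)
print(H.shape)  # (100, 45) energy-by-angle distribution
```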

The engineering team at Tokyo Electron Ltd. uses our non-equilibrium plasma solver, VizGlow™, and particle solver, VizGrain™, to understand the underlying physics and find the best operating conditions for Tokyo Electron Ltd. products. In a paper published in the Journal of Physics D: Applied Physics, Dr. Rochan Upadhyay and Dr. Kenta Suzuki of Esgee Technologies, along with researchers at The University of Texas at Austin, validated the VizGlow™ simulations used to obtain the IEADF in a capacitively coupled plasma reactor.


Esgee Technologies uses software products, databases, and consulting projects to solve challenges faced by industrial manufacturers. We are dedicated to developing plasma and multiphysics simulations for applications across a wide range of manufacturing industries, including semiconductors, with a legacy of support for analyzing existing equipment, improving processes, and developing new equipment concepts through our software.


Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Upadhyay, Rochan, et al. “Experimentally Validated Computations of Simultaneous Ion and Fast Neutral Energy and Angular Distributions in a Capacitively Coupled Plasma Reactor.” Journal of Physics D: Applied Physics 53 (2020). doi:10.1088/1361-6463/aba068.

 

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.