Using VizSpark to Model Electrical Discharge in Combustion Engines

Argonne National Laboratory represents the United States Department of Energy’s commitment to cooperative research and scientific discovery. Since its inception in 1946, Argonne has pioneered laboratory research and experimentation as the first national laboratory in the United States. While research in the decades following its founding centered largely on nuclear energy and its applications, Argonne has broadened its scope since the beginning of the 21st century to include additional energy sources and storage. Now, Argonne constitutes a scientific community of leading researchers, with projects across a spectrum of computational, quantum, and interdisciplinary fields.

Among the contributors in this area are Dr. Joohan Kim and Dr. Riccardo Scarcelli. Their work on modeling spark discharge processes in spark-ignition (SI) engines was recently recognized by Argonne. Dr. Kim received a Postdoctoral Performance Award in the area of Engineering Research, along with ten other postdoctoral appointees whose contributions set a standard not only for the quality of their discoveries, but also for the ingenuity of their techniques and demonstrated leadership capabilities. According to Argonne, awardees’ works have upheld core values of scientific impact, integrity, respect, safety, and teamwork.

Within the highly competitive automotive industry, the need for innovation through design presents opportunities for new tools and technologies. Regulations from governing entities seek to strike a balance between meeting climate goals through greater restrictions on CO2 emissions from automobiles and relying on the transportation industry to fuel trade and commerce. With restrictions focused solely on reducing emissions, applications that meet these criteria without sacrificing capabilities stand out for manufacturers and legislators alike.

Dr. Kim’s work highlights the need for predictive models which can optimize operational parameters for SI systems in order to maximize thermal efficiency gain and lower engine development costs. Creating these predictive models requires advanced simulation software capable of solving and coupling electromagnetic physics and fluid dynamics into a computational framework. When we asked about his use of simulations, Dr. Kim said, “high-fidelity simulations enable us to perform in-depth analysis of the spark-ignition process, including energy transfer, birth of flame kernel, and thermo-chemical properties; these would be difficult to obtain using experimental techniques only.” He went on to add that, “with a fundamental understanding of complex physics, we can develop predictive models that make simulation-based optimization robust and reliable.”

“VizSpark provided a fully-coupled framework between electromagnetic physics and fluid dynamics, and thereby we were able to diagnose the plasma properties occurring within tens of nanoseconds without many assumptions.”

Dr. Kim’s study utilized VizSpark simulations to accurately estimate electrical discharge shape, as well as temperature and pressure of plasma kernels, thus providing a set of robust initial and boundary conditions for studying flame kernel growth under engine-like conditions. He noted “VizSpark provided a fully-coupled framework between electromagnetic physics and fluid dynamics, and thereby we were able to diagnose the plasma properties occurring within tens of nanoseconds without many assumptions.”

VizSpark is a robust, industrial simulation tool for high-fidelity modeling of thermal (arc) plasmas. Additionally, VizSpark is fully parallelized and can be used to perform large, 3D simulations with complex geometries. Its comprehensive solvers and scalability make it ideal for solving real world engineering problems.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

Mirroring the World with Digital Twins

Twins in literature and mythology are a shared theme across cultures and ontologies, exploring early concepts like duality, polarity, and unity. However, equal to these themes were explorations of concepts like loss, fratricide, and self-realization through remorse. Indeed, for every Castor and Pollux, there is a Cain and Abel, or a Romulus and Remus. Twins in myth evoke an impressionistic reaction to the triumphs and tragedy that they represent. Efforts of the current decade may tell us which of the two will ultimately characterize the concept of digital twins and their implementation.

Since being coined in 2003 by Michael Grieves, the term “digital twin” has become an ambiguous label for the future of simulation and modeling applications. While Grieves’ earliest intention was improving product life-cycles, the idea of high-fidelity, virtual representations of physical objects seemed like a certain future for computational modeling given technological capabilities and their increasing role in product design and iteration processes.

What was once Grieves’ insight into the future of technological applications has become a catch-all for any number of virtual models for physical entities, as well as the flow of data between them that provides parity. The resulting ambiguity in the phrase is due to its widespread usage across industries and the dynamic nature of evolving methodologies to reach the virtual “mirrored” / “twinned” ideal.

As with any other technology, simulations and computational models have limitations that tend to be overshadowed by their perceived benefits and desired insights. Moving from the abstract to the concrete, requirements and standards for what constitutes a digital twin have yet to emerge. What’s more, the concept of a digital twin is arguably not new at all, but simply an aggregation of techniques and research already in existence.

SPECULUM SPECULORUM

An issue with the popularity of terms like “digital twin” is that they risk becoming a misnomer due to a lack of common development methodology, much like the internet of things (IoT) platforms they rely on, which often require no internet connection at all. Digital twins face difficulties not only in procuring enough data from sensors to mirror physical entities, but also in procuring and applying the correct data to become accurate representations. For example, a digital twin of a car’s braking system could predict when maintenance will be needed by using predictive models to determine wear on the brake pads. However, even this specific system would rely on numerous external factors like environment, temperature, and lubrication, as well as an IoT platform of sensors that communicate and collect data from connected assets, or parts. The absence of any one of these parameters could result in incomplete or erroneous data that leads to faults in the virtual entity. Identifying missing parameters and diagnosing inconsistencies between physical and virtual entities can make twins prohibitive in terms of both cost and labor.
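To make the brake-pad example concrete, a predictive model of this kind can be sketched in a few lines. The linear wear law, parameter names, and numbers below are all hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical predictive-maintenance sketch for a brake-pad digital twin.
# Assumes a simple linear wear law: pad thickness decreases by a fixed
# amount per braking event, scaled by an environment factor fed in from
# external sensors. Names and numbers are illustrative, not from a real system.

def remaining_braking_events(thickness_mm, min_thickness_mm,
                             wear_per_event_mm, environment_factor=1.0):
    """Estimate braking events left before the pad hits its safety limit."""
    usable = thickness_mm - min_thickness_mm
    if usable <= 0:
        return 0  # already at or below the minimum safe thickness
    effective_wear = wear_per_event_mm * environment_factor
    return int(round(usable / effective_wear))

# Example: 12 mm pad, 3 mm safety limit, 0.001 mm lost per event, and a
# hot, dusty environment that accelerates wear by 25%.
print(remaining_braking_events(12.0, 3.0, 0.001, environment_factor=1.25))  # 7200
```

A real twin would, of course, update the environment factor continuously from sensor data rather than treating it as a constant.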

The figure below shows hypothetical examples of digital twin implementations for an atomic layer deposition reactor, a complex machine used to deposit thin films onto materials.

At their core, digital twins are real-time, virtual representations of physical entities enabled by sensors and data. Twins can take on specific roles depending on the type of problem they solve or the advantages they offer. Adopting the model introduced by Oracle, there are three primary implementations for twins:

Virtual Twins

A virtual representation of a physical entity or asset. These contain data – best described as parameters – provided manually to the virtual twin from its physical counterpart, and require that the virtual twin be able to establish a connection in order to retrieve information from the physical environment. The type and number of parameters sent across this connection – as well as their accuracy – are the primary attributes in grading and defining the “fidelity” of the virtual entity.
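As a toy illustration of these ideas, a virtual twin can be thought of as a parameter store mirrored from its physical counterpart, with fidelity graded by how many of the expected parameters are actually present. Everything below – the asset, parameter names, and the fidelity score – is hypothetical:

```python
# Illustrative sketch of a virtual twin holding parameters mirrored from a
# physical asset. The asset and update mechanism are hypothetical; a real
# twin would pull readings over a live connection to the physical environment.
from dataclasses import dataclass, field

@dataclass
class VirtualTwin:
    asset_id: str
    parameters: dict = field(default_factory=dict)

    def update(self, readings: dict):
        """Mirror the latest readings from the physical counterpart."""
        self.parameters.update(readings)

    def fidelity(self, expected_keys):
        """Crude fidelity score: fraction of expected parameters present."""
        present = sum(1 for k in expected_keys if k in self.parameters)
        return present / len(expected_keys)

twin = VirtualTwin("pump-07")
twin.update({"rpm": 1480, "inlet_temp_C": 42.5})
print(twin.fidelity(["rpm", "inlet_temp_C", "vibration_mm_s"]))  # ~0.667
```

A missing sensor (here, vibration) immediately shows up as reduced fidelity – a toy version of the incomplete-data problem described above.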

Predictive Twins

As the name suggests, this implementation focuses on creating predictive models and is not a static representation of a physical entity, but one based on data gathered from historic states. These twins serve to detect problems that could occur at a future state and proactively protect against them, or to allow designers the opportunity to diagnose and prevent the problem. Predictive twins are potentially much simpler than other implementations, and can focus on specific parameters like machine data rather than constantly receiving information from sensors and recreating a full virtual environment.

Twin Projections

This implementation is also used to create predictive models, but relies heavily on IoT data exchange between individually addressable devices over a common network, rather than sensors or physical environments. Applications or software that generate insights from the IoT platforms generally have access to aggregate data that is used to predict machine states and alleviate workflow issues.

There are a number of issues that each implementation faces. Maintaining connectivity to sensors for data transfer from physical entities, managing the volume of network traffic between devices, and identifying key parameters are make-or-break in implementing successful twins. The as-yet-ununified methods of collecting data further exacerbate the situation, with most vehicles for standardization lying in shared models and information.

The issue that results from relying on such collaborations has to do with data ownership; an issue already marred by controversies both moral and legal. Nonetheless, the promises of improvements for behavior, conformity, design, manufacturability, and structure have already attracted major attention from researchers.

BEAUTY IN COMPLEXITY

Given the broad applications and ambitious tech behind the concept, the question of what cannot be digitally twinned is interesting to consider, especially given that a digital twin of Earth is already in production. The answer depends ultimately on what a digital twin’s use-case is, and to what degree it is able to achieve and produce desired results.

Using this as a criterion doesn’t narrow the already broad definition of what constitutes a digital twin; one could argue that established technologies like Google Maps and Microsoft Flight Simulator are digital twins. While this may detract from its novelty, “digital twin” as a term also carries an undertone of possibility through connectivity. Excitement surrounding digital twins is heavily tied to the anticipation of a new level of interconnectedness between devices that enables automation and machine learning. This is seen as a new phase for technology – even a new, fourth industrial revolution, commonly referred to as Industry 4.0.

Still, the complexity of digital twins creates a high barrier for production and implementation for many prospective innovators. A general misconception is that digital twin production requires that a company simply hire data scientists and provide them an analytics platform. Domain expertise and product lifecycle management tend to be overlooked as a result.

Configurations of assets on a product also impact design and are subject to changes in scale and capabilities. Divergence from original, pilot assets can create a cascading effect of incorrect or outdated information between iterations or generations of a product. Asset changes are not always anticipated, certain assets outlast others, and asset replacement in cases of failure can mean drastic changes in design. In the case of products that go through several generations or are sold for decades on the market, synchronization of digital twins is the only solution; this could occur as often as changes are made to the product itself.

It can be challenging to coordinate with manufacturing processes and across iterations or versions as a product makes its way to the consumer. One of the primary use-cases for digital twins in manufacturing has to do with shop floor optimization. Similar focuses on improving operations are found for supply chain use-cases seeking to optimize warehouse design. Generally, study and expertise surrounding these kinds of improvements and optimizations falls under maintenance, repair, and operations (MRO).

SIMULATION-BASED DIGITAL TWINS

Computational simulations are a core feature that facilitates the development of digital twins. By combining high-fidelity simulations and fully coupled multiphysics solvers, companies can create models for assets and tweak them using their own data. Simulation insights create robust iteration phases that can cut process and testing costs, ultimately leading to shorter cycle times and greater management of product life cycles. Regardless of the size of a company or the scale of its products, simulations can connect the earliest designs made by research and development teams to final iterations made by manufacturing teams by providing clear, relevant physical and chemical insights.

“Ultimately, an industrial simulation that does not incorporate high-fidelity physics is essentially digital art.”

Given the increasing market focus on visual and virtual utility, impressive graphics could be misleading when it comes to digital twins. Ultimately, an industrial simulation that does not incorporate high-fidelity physics is essentially digital art. Within technical domains, the centermost aspect of a digital twin should be the fidelity with which it can predict not only steady-state processes, but also edge cases where physics is set to be challenging.

Of all the engineering design problems with applications for digital twins, those experienced within the semiconductor industry are perhaps the most complex. In this industry’s “race to the bottom,” providing high-fidelity models requires the capability to determine the effects of disruptors like chemical impurities – which can threaten the functionality of critical components like wafers – at a margin of one part per trillion (or one nanogram per kilogram). Additional processes like atomic layer deposition are extremely sensitive to local species concentration as well as pressure profiles in the vicinity of the wafer being produced. While these are examples of restrictions based on the difficulty of working at an atomic scale, insight and perspective in the design and manufacturing process for semiconductors represent one of the most rigorous testing grounds for digital twins.
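As a sanity check on the purity figure quoted above, the equivalence between one part per trillion and one nanogram per kilogram is a one-line calculation; the sketch below is purely illustrative arithmetic:

```python
# One nanogram of impurity per kilogram of material is a mass fraction
# of 10^-12, i.e. one part per trillion. Purely illustrative arithmetic.
NANOGRAM_IN_KG = 1e-12  # 1 ng expressed in kg

def mass_fraction(impurity_kg, sample_kg):
    return impurity_kg / sample_kg

# 1 ng of impurity in 1 kg of material:
print(mass_fraction(NANOGRAM_IN_KG, 1.0))  # 1e-12
```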

Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Rasheed, Adil, Omer San, and Trond Kvamsdal. “Digital twin: Values, challenges and enablers from a modeling perspective.” IEEE Access 8 (2020): 21980-22012.

Rajesh, P. K., et al. “Digital twin of an automotive brake pad for predictive maintenance.” Procedia Computer Science 165 (2019): 18-24.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com.

Plasma Processing with Carbon and Fluorine

As the semiconductor industry continues to shrink critical feature sizes and improve device performance, challenges in etch processing are increasing as a result of smaller features being processed with new device structures. Higher density and higher-aspect-ratio features are introducing new challenges that require additional innovation in multiple areas of wafer processing. As a result of their complexity, these innovations are increasingly reliant on comprehensive physical, chemical, and computational models of plasma etch processes.

Plasma etching is a critical process used in semiconductor manufacturing for removing materials from unit surfaces and remains the only commercially viable technology for anisotropic removal of materials from surfaces. Although plasma was introduced into nanoelectronic fabrication processes in the mid-1980s and transistor size has since shrunk by nearly two orders of magnitude, from 1.0 μm to ∼0.01 μm today, this progress was driven mainly by trial and error. Unfortunately, detailed mechanisms for plasma etch processes are not yet well understood for a majority of process gasses. The development, improvement, and validation of these mechanisms therefore remains a constant endeavor – one that would open up more opportunities for innovation in this area.

Every Last, Atomic Detail

The growing costs of etching are threatening to slow the rate of improvement for density and process speed, though manufacturing expenses can be mitigated using simulation tools. Each generation of devices requires more layers, more patterning, and more cycles of patterning that continue to increase overall cost and complexity. Even if component size can be reduced further, this presents manufacturers with additional costs in developing even more precise lithography and etching machines. This highlights the balance between atomic layer processing in high volumes and the need for a renewed approach to miniaturization in order to extend Moore’s Law.

Plasma etching takes place as part of the process of wafer fabrication, which in turn is a main process in the manufacturing procedure for semiconductors. For a wafer to be finalized, cycles must be completed potentially hundreds of times with different chemicals. Each cycle increases the number of layers and features that the desired circuit carries. 

Wafers begin as pure, non-conductive, thin silicon discs generally ~6 to 12 inches in diameter. These wafers are made of crystalline silicon, with extreme attention to chemical purity before oxidation and coating can occur. Oxidation is one of the oldest steps in semiconductor manufacturing, and has been used since the 1950s. Silicon has a great affinity for oxygen, which is absorbed readily and transported across the growing oxide. Layers of insulation and conductive materials are then coated onto the wafer before a photoresist – a mask for etching into the oxide – can be applied. Photoresist turns into soluble material when exposed to ultraviolet light, so that exposed areas can be dissolved using a solvent. The resulting pattern is what gives engineers control at later stages like etching and doping, when devices are formed.

Integrated circuit patterns are first mapped onto a glass or quartz plate, with holes and transparencies that allow light to pass through, and with a separate plate masking each layer of the circuit. The aforementioned ultraviolet light is then applied to transfer patterns from the photoresist coatings onto the wafer, with the photoresist chemicals also being removed prior to etching. It is at this point that a feed gas stream – a mixture of gasses with a carrier (like nitrogen) and an etchant (or other reactive gas) – is introduced to create chemical reactions that remove materials from the wafer.

During the etching process, areas left unprotected by the photoresist layer are chemically removed. Etching generally refers to the removal of materials; however, it requires that photomask layers and underlying materials remain unaffected in the process. In some cases, as with anisotropic etches, materials are removed in specific directions to produce geometric features like sharp edges and flat surfaces, which can also increase etch rates and lower cycle times. Metal deposition and etching involve placing metal links between transistors, and constitute one of the final steps before a wafer can be completed.

Both physical and chemical attributes are present in the etching process. The active species (atoms, ions, and radicals) are generated in the electron impact dissociation reaction of feed gasses. Feed gas mixtures for plasma etching are usually complex due to the conflicting requirements on etch rate, selectivity to mask and underlayer, and anisotropy. Also, the plasma itself dissociates the feed gas into reactive species which can react with each other in the gas phase and on the surface, leading to a further cascade of species generation in the plasma.

The most common etchant atoms are fluorine (F), chlorine (Cl), bromine (Br), and oxygen (O), which are usually produced using mixtures of chemically reactive gasses such as CF₄, O₂, Cl₂, CCl₄, HBr, and CHCl₃. Inductively coupled and capacitively coupled plasma reactors (ICP and CCP, respectively) have found the most widespread use in semiconductor manufacturing. ICP sources allow the generation of relatively dense plasmas (∼10¹⁶–10¹⁷ m⁻³) at relatively low gas pressures (1–10 mTorr). With independent wafer biasing, they also allow independent control of the ion flux and ion energy at the wafer surface. This process can be engineered to be chemically selective in order to remove different materials at different rates.
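As a rough illustration of what such densities mean at the wafer, the ion flux to a surface can be estimated from the Bohm criterion, Γ ≈ 0.61·nₑ·u_B with u_B = √(eTₑ/mᵢ). The sketch below assumes an argon plasma at a 3 eV electron temperature; both are illustrative choices, not values from this article:

```python
import math

# Back-of-the-envelope Bohm-flux estimate for the ICP densities quoted above.
# Assumes a pure argon plasma and a 3 eV electron temperature (illustrative).
E_CHARGE = 1.602e-19             # elementary charge, C
ARGON_MASS = 39.95 * 1.661e-27   # argon ion mass, kg

def bohm_flux(n_e_m3, Te_eV, ion_mass_kg=ARGON_MASS):
    """Ion flux to a surface: Gamma ~ 0.61 * n_e * u_B (Bohm velocity)."""
    u_bohm = math.sqrt(E_CHARGE * Te_eV / ion_mass_kg)  # m/s
    return 0.61 * n_e_m3 * u_bohm

# Density at the upper end of the quoted ICP range, 1e17 m^-3:
print(f"{bohm_flux(1e17, 3.0):.2e} ions/m^2/s")
```

This kind of estimate is exactly the quantity that independent wafer biasing lets engineers tune separately from ion energy.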

Molecular Design in Mind

One of the most important applications of plasma etching is the selective, anisotropic removal of patterned silicon or polysilicon films. Precursor feedstock gasses bearing halogen atom etchants (F, Cl, Br) are almost always used for this purpose. Common feedstock gasses for F atoms are CₓFᵧ, SF₆, and NF₃. Understanding the physical and chemical processes in reactive plasmas requires reliable elementary finite-rate chemical reaction mechanisms. Tetrafluoromethane (CF₄) is one of the most frequently used gasses for the generation of F atoms. The admixture of a small percentage of oxygen to a CF₄ plasma dramatically increases the etch rates of silicon surfaces, and can also be used to control the lateral etching of silicon.

Distribution of electron temperatures in an ICP reactor modeled using VizGlow.

Tetrafluoromethane (CF₄) is an important feed gas for plasma etching of silicon. It is relatively easy to handle, non-corrosive, and has low toxicity. CF₄ has no stable electronically excited states, which means that the electron energy is spent on the generation of chemically active ions and radicals without electronic excitation losses. While tetrafluoromethane plasmas have been studied since the early development of plasma etching processes, the influence of various gas-phase and surface reactions on the densities of active species is still poorly understood.

VizGlow is a full-featured, high-fidelity simulation tool for the modeling of chemically reactive plasmas, which are present in half of the steps undertaken in the semiconductor fabrication process described above. The characteristics of gas species and kinetic modeling of their reactions remain an area with yet-unexplored potential for further innovation. Radicals created by plasmas are extremely reactive due to their unpaired electrons, a property semiconductor engineers exploit to reduce process and cycle times. The same is true for deposition processes, where radicals prevent damage to the chip as it cools from the >1000 °C temperatures produced within etching equipment. Throughout these processes, defects, impurities, and nonuniformities can be detected and diagnosed with help from simulated models. Simulations using VizGlow can help guide design iterations to avoid operating conditions that could compromise wafers even after months of processing.

Thanks for reading! If you’re still curious about the topics discussed in this article, check out the following journal papers (and ask us for a free copy!):

Levko, Dmitry, et al. “Computational study of plasma dynamics and reactive chemistry in a low-pressure inductively coupled CF4/O2 plasma.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 39.4 (2021): 042202.

Levko, Dmitry, Chandrasekhar Shukla, and Laxminarayan L. Raja. “Modeling the effect of stochastic heating and surface chemistry in a pure CF4 inductively coupled plasma.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 39.6 (2021): 062204.

Levko, Dmitry, et al. “Plasma kinetics of c-C4F8 inductively coupled plasma revisited.” Journal of Vacuum Science & Technology B, Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 40.2 (2022): 022203.

Lee, Chris GN, Keren J. Kanarik, and Richard A. Gottscho. “The grand challenges of plasma etching: a manufacturing perspective.” Journal of Physics D: Applied Physics 47.27 (2014): 273001.

Kanarik, Keren J. “Inside the mysterious world of plasma: A process engineer’s perspective.” Journal of Vacuum Science & Technology A: Vacuum, Surfaces, and Films 38.3 (2020): 031004.

Marchack, N., et al. “Plasma processing for advanced microelectronics beyond CMOS.” Journal of Applied Physics 130.8 (2021): 080901.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com. This post’s feature image is by Laura Ockel & Unsplash.

Fulfilling the Need for Plasma Experts

The Fourth State; The First Priority

With renewed investment in the semiconductor industry by both the American government and private investors, opportunities for the next generation of American engineers are set to soar as the United States attempts to reclaim a greater share of global production. Compounding the increasing demand for semiconductor engineers and technical specialists is a growing shortage of qualified workers. Although semiconductors have become key to supporting global critical infrastructure, injecting billions into the industry alone will not solve the problem. These factors are culminating in a pivotal moment for the trajectory of our shared technological future.

Along with the increase in funding and opportunity comes another technical domain that the United States can target with its investments and incentives: plasma. Plasma cleaning, etching, and deposition, which play a heavy role in the early stages of material removal from (or addition to) surfaces, have created large intersections between the realms of applied engineering and chemistry. Although plasma techniques are not new to the manufacturing process, expertise in these areas has been outsourced to other stops in the global semiconductor supply chain. Plasma is a key component enabling control and manipulation of physics at the atomic level, at a time when transistors have already pushed into the dimensions of single nanometers. Despite the dearth of expertise surrounding plasma devices, and given the ever-increasing expectations of the semiconductor industry, plasma is set to play an even greater role in the semiconductor manufacturing process.

Clairvoyance in the Semiconductor Industry

In an industry that requires such high-level specialization, partnerships with academia and programs that introduce the field to young talent are an important step in fostering interest in these positions. A 2017 survey of US-based semiconductor manufacturers revealed that despite 75% of companies planning increased spending on learning and development (L&D), there was a critical lack of resources and infrastructure to offer training. Semiconductor manufacturers may offer an unparalleled level of hands-on skill acquisition, but this would also require technical experts to serve as mentors in the workplace.

For EsgeeTech, a renewed focus on domestic semiconductor production presents the opportunity to serve the needs of the semiconductor industry through enhancing their plasma expertise. Douglas Breden and Anand Karpatne, our in-house plasma experts who have recently trained employees from Applied Materials and Tokyo Electron, made the point that a semiconductor wafer carries thousands of dollars worth of value in an area smaller than the size of a penny. These wafers are processed by complex machines which employ inductively coupled plasma (ICP) or capacitively coupled plasma (CCP) systems. With steep demand for the reliability of such manufacturing systems, root cause analysis of any potential issue would require an in-depth understanding of plasma physics. At Esgee, we equip our customers with customized training to enhance their understanding of plasma systems. As a result of feedback we have received recently, we are planning to expand our training to non-customers later this year. Breden and Karpatne believe that the fundamentals of plasma physics have become crucial for those seeking an understanding of modern semiconductor manufacturing systems.

However, there remains a lack of industry-specific training surrounding plasma theory and application. Even on the internet there is little discussion of this subject, despite the fact that the entire internet age has been enabled by plasma-manufactured systems. This is a core issue that EsgeeTech aims to resolve. The “esoteric” tag attached to plasma manufacturing is unjustified. Although specialization remains an enabling feature of the global supply chain, concentration of critical processes has also created the risk of systemic bottlenecks in the case of geopolitical conflict, natural disasters, or other issues that could affect specialist regions’ contributions to the global chain. Competition in plasma-specialized processes would reduce these vulnerabilities while alleviating demand.

Such a solution is in the best interest of semiconductor producers as well as the average electronics consumer, whose cell phones, computers, and similar gadgets all rely on plasma to be produced. EsgeeTech stands out as a company with an established background in the plasma techniques used by the semiconductor industry. Decades of experience through consultation projects with semiconductor manufacturers have contributed to our understanding of the problems faced within the industry, as well as their potential solutions. Our flagship product, VizGlow, is designed specifically with the semiconductor industry in mind, with the goal of providing an end-to-end software package that enables innovation through robust multiphysics simulations.

VizGlow™ has always been a product developed with a focus on providing and improving its applications for semiconductor engineers. EsgeeTech’s larger clients have engaged with us in verification and validation of experimental data through VizGlow™, with many of these efforts currently available for review in open literature.

Envisioning Tomorrow

“The number of transistors on a microchip doubles every two years, while the cost of computers is halved” is an observation of technological trends known as “Moore’s Law.” Since the concept was first introduced in 1965 by Gordon Earle Moore, it has become a target for the speed of scaling and miniaturization within the semiconductor industry. The result of this desired level of innovation is that designs made today for tomorrow’s devices are done so with the expectation (but not the guarantee) of maintaining this rate. This creates immense pressure within the industry to revise and refine techniques, and is further compounded by the shortage of workers. At this stage of quantum processing and engineering, attracting qualified workers requires a promotion of the fields that semiconductor engineering intersects, as well as improving resources to lower the industry’s entry barrier.
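The doubling rule quoted above compounds quickly, which is what makes the pace so punishing for the industry. A small illustrative calculation (the starting transistor count is arbitrary):

```python
# Moore's-law projection as stated above: transistor count doubles every
# two years. The starting count below is an arbitrary illustrative value.
def projected_transistors(initial_count, years, doubling_period_years=2):
    doublings = years / doubling_period_years
    return initial_count * 2 ** doublings

# Starting from 1 billion transistors, ten years of Moore's law:
print(projected_transistors(1e9, 10))  # 32000000000.0 (a 32x increase)
```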

EsgeeTech is a company that also relies on attracting a qualified workforce with interest in providing expertise to serve the semiconductor industry. Our collective backgrounds in fluid flow, electromagnetics, kinetic modeling, computational sciences, and computer science are examples of how diverse teams enable a company to excel amid an ever changing market. It also underscores the nature of plasma as a multidisciplinary subject, and further highlights the difficulties semiconductor manufacturers face in filling vacancies.

Interested in learning more about plasma flow simulations? Click here to take a look at our previous article. Feel free to follow us on Twitter and LinkedIn for more related news, or reach out to us directly at info@esgeetech.com. This post’s feature image courtesy of Hal Gatewood & Unsplash.

Propelling the Space Age with Ion Thrusters

Human curiosity allowed spaceflight to grow over the last century and to accelerate dramatically in recent years, thanks to the technologies that drive the rapid deployment of spacecraft. Statistics show that 2020 saw a record-breaking 114 launches, which placed over 1,000 satellites in orbit. The number grew further in 2021: in a span of nine months, over 1,600 satellites were launched.

Launching satellites to orbit is arduous, and maintaining their orbits throughout their lifetime to accomplish their mission is a tough problem as well. Gravitational perturbations combined with aerodynamic drag can deorbit satellites.

Small orbital thrusters keep satellites on their desired track by firing periodically to counteract gradual changes in orbit, an effect known as orbital decay. With an ever-growing number of satellites orbiting with tight tolerances, maintaining precise orbits and performing controlled deorbits at end of life are also critical.

Highly efficient and reliable thrusters are being developed by various agencies around the world. Orbital thrusters come in different types, broadly classified as chemical thrusters and non-chemical thrusters such as ion thrusters, which are the topic of this blog.

Same requirements, two different approaches:

Thrust can be quantified as the product of the expelled mass flow rate and the exhaust velocity. This gives flexibility to attain the desired thrust either with a large mass flow rate expelled at low velocity or with a small mass flow rate expelled at high velocity.
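As a rough illustration of this trade-off, the thrust relation F = ṁ·v&#8337; can be sketched in a few lines of Python. The flow rates and velocities below are made-up but representative numbers, not data from any specific thruster:

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust(mass_flow_rate, exhaust_velocity):
    """Thrust (N) from mass flow rate (kg/s) and exhaust velocity (m/s)."""
    return mass_flow_rate * exhaust_velocity

def specific_impulse(exhaust_velocity):
    """Specific impulse (s) for a given exhaust velocity (m/s)."""
    return exhaust_velocity / G0

# Two ways to produce roughly 1 N of thrust:
chemical = thrust(3.4e-4, 2900.0)   # large flow, low velocity (Isp ~ 300 s)
ion      = thrust(5.0e-5, 19600.0)  # small flow, high velocity (Isp ~ 2000 s)
```

The two strategies deliver nearly the same thrust, but the high-velocity route consumes an order of magnitude less propellant per second.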

Chemical thrusters eject combustion byproducts at a large mass flow rate but at a relatively low velocity. Powerful chemical thrusters, fed by large fuel tanks, are used during the initial stages of spaceflight to reach orbit. These thrusters have low specific impulse.

Unlike traditional chemical thrusters, ion thrusters work by Coulombic acceleration of ions (typically xenon) in an electrostatic field to high velocity. This results in a high specific impulse, upwards of 2,000 s, with electrical efficiencies reaching as high as 80%. Along with fast response times, ion thrusters can be precisely controlled, reducing the need for large propellant tanks. Ion thrusters are therefore highly sought after for long-term interplanetary and deep-space missions.

Iodine thrusters, a possible alternative to more expensive xenon thrusters:

While xenon-based ion thrusters are great alternatives to chemical thrusters, their operational cost is high due to the rarity of xenon. Novel propellants such as water, krypton, and bismuth are being researched, and costly experiments are being conducted worldwide to study them. In this blog we explore iodine as a possible alternative. The main advantages of iodine are a larger ionization cross-section than xenon and a small volumetric footprint for storage, since iodine is solid at standard temperature and pressure.
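The storage advantage is easy to estimate. The densities below are approximate round numbers we assume for illustration (solid iodine near its STP density; xenon stored as a high-pressure supercritical fluid):

```python
# Approximate, assumed storage densities in kg/m^3 (illustrative only)
RHO_IODINE_SOLID = 4930.0   # solid iodine at standard conditions
RHO_XENON_TANK   = 1600.0   # xenon in a typical high-pressure tank

def tank_volume_liters(propellant_mass_kg, density_kg_m3):
    """Storage volume (liters) for a given propellant mass and density."""
    return propellant_mass_kg / density_kg_m3 * 1000.0

m = 10.0  # kg of propellant
print(tank_volume_liters(m, RHO_IODINE_SOLID))  # ~2.0 L
print(tank_volume_liters(m, RHO_XENON_TANK))    # ~6.3 L
```

Beyond the factor-of-three volume saving, solid iodine does not require a heavy pressurized tank at all.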

High-fidelity numerical models using VizGlow™:

Prototyping ion thrusters for experimentation and iterative design is time-consuming, difficult, and expensive, and replicating their operating conditions in the laboratory is challenging. Researchers therefore often rely on numerical models to optimize their designs. Modelling an ion thruster is a multiphysics problem spanning a broad spectrum of physics: fluid dynamics, reactive flow, plasma, electromagnetics, and surface physics all have to be accounted for at the highest fidelity. Coupling these individual physics is not straightforward in many cases, so approximations are often made to simplify the mathematics. Such reduced models may not always represent the true behavior of ion thrusters, and the literature on fully coupled computational studies of ion thrusters remains scarce.

VizGlow™ is a fully coupled, self-consistent non-equilibrium plasma solver that allows accurate numerical simulation of plasma thrusters. VizGlow™ couples multispecies, chemically reactive plasma dynamics with Maxwell’s equations for the electromagnetic waves. Our tool also supports hybrid models that combine traditional Navier-Stokes (CFD) equations with particle-in-cell (PIC) methods. This can be used, for example, to describe the behavior of the plasma plume ejected into vacuum.

Our team at Esgee Tech, along with collaborators, has simulated an ion thruster that uses iodine.

Simulation setup and discussion:

Here the computational domain consists of a 6 cm × 10 cm plasma chamber within which iodine plasma is generated by a four-turn metal coil antenna driven at 13.56 MHz, depositing a power of 100 W. A dielectric quartz layer separates the coil antenna from the plasma chamber. This study compares the plasma properties at two different pressures, 1.0 Pa and 2.5 Pa.
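For context, the neutral gas number densities corresponding to these two chamber pressures can be estimated from the ideal gas law; the 300 K neutral gas temperature used below is our assumption, not a value from the simulation:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def number_density(pressure_pa, temperature_k):
    """Neutral gas number density (m^-3) from the ideal gas law, n = p / (kB * T)."""
    return pressure_pa / (KB * temperature_k)

T = 300.0  # K, assumed neutral gas temperature
for p in (1.0, 2.5):
    print(f"{p} Pa -> {number_density(p, T):.2e} m^-3")  # ~2.4e20 and ~6.0e20
```

Densities in the 10^20 m^-3 range are typical of the low-pressure regime where inductively coupled plasmas exhibit strong non-equilibrium behavior.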

The simulation captured non-equilibrium effects such as disparate electron and heavy-particle temperatures, and it resolved the electrostatic sheaths. The electrostatic Joule heating is negligible compared to the wave power deposited into the plasma. The dielectric quartz wall attains a potential that balances the fluxes of positive and negative species.

Results from VizGlow™ indicate that at both pressures almost all of the molecular iodine dissociates, and singly ionized atomic iodine (I+) dominates the ion mixture. The concentration of the molecular iodine ion, I2+, is about one order of magnitude smaller than that of the dominant I+.

Pressure affects the composition of the generated plasma. At a chamber pressure of 1.0 Pa the plasma is electropositive, with electrons as the dominant negatively charged species. This changes as the pressure increases: the plasma formed at 2.5 Pa is electronegative, with atomic iodine anions (I-) as the dominant negatively charged species.

An interesting feature that VizGlow™ captured is the presence of vortex-like structures surrounding the plasma bulk. This effect results from the transport of electrons and anions in the electronegative plasma.

Future of iodine thruster modelling:

VizGlow™ is constantly evolving, with new features, a growing database of chemistries, and fast, accurate solvers. Users can perform trade studies to understand the effects of chamber pressure and temperature. Future work in this area could also focus on the reactivity of iodine with the surfaces of the thruster and spacecraft to estimate thruster lifetime.

VizGlow™ can simulate real designs of the ion thrusters meant to propel the next generation of efficient spacecraft and expand our understanding of the cosmos.

Further reading:

Interested in the details? Send an email to info@esgeetech.com to receive a copy of our colleagues’ work!

Finding this interesting? Let’s connect!

You have reached the end of this blog, but don’t worry: there are several other topics that may interest you. Email us and follow us on LinkedIn to keep up with our latest posts on numerical plasma physics models.