Modeling and Testing of an Electrically Heated Hotspot

The ignition of flammable atmospheres by hot surfaces is a known cause of industrial accidents. We investigate the constraints on creating an electrically heated hotspot capable of igniting n-hexane and air mixtures. The temperature profile of an electrically heated stainless steel disk (Ø 50.8 mm) was studied, with the requirement of reaching ignition temperatures (~1100 K) at the hotspot. We used Abaqus/CAE to build thermal models of the disk and conducted a design study to determine parameters for the experiment. Based on these results, we designed an experimental setup in SolidWorks and had the parts machined. We then assembled the experiment, passed various currents through the disk, measured the temperature over time (primarily at the center of the disk), and compared these measurements to the Abaqus predictions. The results show that a stainless steel disk of this size can reach ignition temperatures and ignite a hydrogen-air mixture.

Author: Athena Kolli
California Institute of Technology
Mentors: Joseph Shepherd and Donner Schoeffler
Explosion Dynamics Laboratory, California Institute of Technology
Editor: Stephanie Chen

Introduction

Accidental ignition of a flammable atmosphere by hot metal surfaces is a significant risk in industrial settings. Ignition in these environments can occur when the flammable gas is exposed to a metal surface heated to sufficiently high temperatures, which we predict to be around 1000 to 1100 K [1]. However, the conditions that lead to ignition are also likely dependent on the geometry of the heated surface, the size of the region that reaches ignition temperatures (called the hotspot), the length of time that the hotspot is at ignition temperature, and other factors.

In previous years, the Explosion Dynamics Laboratory (EDL) has explored the conditions necessary for ignition with many different test articles, including moving spheres, glow plugs, and horizontal cylinders. The most recent work in thermal ignition has investigated the ignition conditions of electrically heated vertical stainless steel cylinders of various dimensions. As stated above, it was found that for these vertical cylinders, ignition of n-hexane mixtures occurred between 1000 and 1100 K [6]. However, these results may be characteristic of vertical cylinders and other previously explored geometries, so the next step in this line of research is to choose a new geometry with unique fluid mechanics for the test article and determine the conditions under which ignition occurs. We decided to work with hotspots because they produce unique fluid mechanics compared with all work previously done in the EDL, and also because they are of interest to the aerospace industry.

The purpose of the present work is to design and fabricate a test article with which future experiments investigating the thermal ignition of a hotspot can be performed. The goal is to develop and understand the dimensional and current requirements of a 50.8 mm diameter stainless steel disk and accompanying assembly capable of reaching ignition temperatures at the hotspot. We chose this size because hotspots in this size regime are relevant to the interests of our funding agency, the Boeing Company.

The challenges involved in this project stem from the axisymmetric nature of the hotspot. Because the current is carried through the disk along the radial direction, the resistance and current vary along this path. As a result, elementary analytic heat transfer techniques cannot be used.

The three modes by which heat is transferred are conduction, convection, and radiation. In this project, we account for conduction and radiation, while convective effects are negligible. Heat conduction is energy transfer within a single substance due to a temperature gradient within it. In the case of the disk, while current is passed through its center, its outer edge remains relatively cooler, and heat conducts in the radial direction. Convection is the transfer of heat by the movement of a fluid, driven by the tendency of hotter, and therefore less dense, material to rise. Here, convective effects are considered negligible because we are dealing with high temperatures: convective loss grows roughly linearly with the surface temperature T, while radiative loss grows with T⁴ [2]. Radiation is the emission of energy as electromagnetic waves from a hotter body to its cooler surroundings. All objects above absolute zero emit some thermal radiation, but in our work, we focus on the heat lost from the surface of the disk by radiation. Together, these two modes of heat transfer account for almost all of the energy exchanged in the system.

In this project, we heat the disk by passing an electric current through it, causing resistive heating, also known as Joule heating. Joule’s law states that the power produced by Joule heating is directly proportional to the product of the conductor’s resistance and the current squared:

P \propto I^2 R

The complication in this case arises when we consider the resistance of the disk, along the direction in which current is flowing. Resistance is defined by the following equation:

R = \frac{\rho l}{A}

where \rho [Ω·m] is the resistivity, l [m] is the length, and A [m²] is the cross-sectional area. As the current travels through the disk in the radial direction, the cross-sectional area changes continuously, so these calculations cannot easily be done by hand. Consequently, we used the computer-aided engineering (CAE) software Abaqus to perform them.
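
To make the difficulty concrete, the resistance seen by a purely radial current can be built up from thin concentric shells: for a disk of thickness t, a shell at radius r has cross-sectional area A(r) = 2πrt, and the shell contributions are integrated over the radius. The Python sketch below performs that integration numerically for an idealized geometry that ignores the stem; the 50.8 mm diameter and 0.508 mm thickness are the design values, while the resistivity and stem radius are illustrative assumptions.

import numpy as np

def disk_resistance(rho, r_in, r_out, thickness, n=1000):
    """Resistance of an annular disk carrying purely radial current.

    Each thin shell at radius r has cross-sectional area A(r) = 2*pi*r*t,
    so dR = rho * dr / (2*pi*r*t); the shells are summed numerically.
    `rho` may be a constant or a function of radius (e.g., to fold in a
    temperature-dependent resistivity profile).
    """
    r = np.linspace(r_in, r_out, n)
    rho_r = rho(r) if callable(rho) else np.full_like(r, rho)
    return np.trapz(rho_r / (2.0 * np.pi * r * thickness), r)

# Illustrative values: stainless steel resistivity ~7.2e-7 ohm-m, the
# 0.508 mm disk thickness, current entering at an assumed 1.6 mm stem radius.
R = disk_resistance(7.2e-7, 1.6e-3, 25.4e-3, 0.508e-3)
print(f"R = {R * 1e3:.3f} mOhm")  # analytic check: rho/(2*pi*t) * ln(r_out/r_in)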

Finally, we conducted a design study for the disk and assembly. In this study, we went through an incremental design process during which we altered the dimensions and materials of parts of the cross section to create a continuously decreasing radial temperature profile.

Methods

Validation of Abaqus/CAE

Abaqus/CAE was chosen to determine the dimensions and current for the experimental setup because this software offers an axisymmetric mode for solving heat transfer problems, allowing us to easily model the disk and change its dimensions in order to achieve the desired hotspot temperature. The axisymmetric mode also allowed our models to have a high level of detail while keeping the total number of nodes low. Additionally, Abaqus allowed us to use temperature-dependent data for values such as the thermal and electrical conductivity and specific heat of the materials. In a quick design study of the one-dimensional resistive heating of a rectangular sheet of stainless steel, we found that using temperature-dependent data for thermal and electrical conductivity and specific heat yields temperatures as much as 70 K greater than those obtained with constant values (Figure 1) [3][4]. This study shows that choosing the dimensions and current level for our test article and assembly with the axisymmetric mode in Abaqus, using temperature-dependent properties, provides quicker and more accurate results.

Figure 1: Temperature vs. distance. Temperature-dependent material properties yield higher predicted temperatures at the center of the disk than constant properties, and the temperature-dependent results are the more accurate of the two.

Next, we validated Abaqus for the coupled thermal-electric mode that we planned to use by testing it against known solutions. We did this by comparing Python scripts implementing the known solutions against Abaqus models with identical initial conditions, boundary conditions, and material properties.

The first case we considered is the zero-dimensional resistive heating of a rectangular sheet of metal, in which current is applied through the ends while the sides and the top and bottom faces are electrically insulated (Figure 2A). Additionally, the top and bottom faces radiate thermal energy, while the ends and sides are thermally insulated. The Python script uses an energy balance for the block, modeling the change in energy over time as the input from Joule heating minus the loss due to radiation. The ordinary differential equation solver from the scipy package integrates the resulting differential equation for temperature as a function of time.
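
A minimal sketch of such a script, assuming illustrative sheet dimensions and material properties rather than the values actually used, is:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sheet and material parameters (not the script's actual values).
L, w, t = 0.10, 0.02, 0.5e-3           # sheet length, width, thickness [m]
rho_e = 7.2e-7                         # electrical resistivity [ohm-m]
rho_m, c = 8000.0, 500.0               # density [kg/m^3], specific heat [J/kg-K]
eps, sigma = 0.35, 5.670e-8            # emissivity, Stefan-Boltzmann constant
I, T_inf = 100.0, 300.0                # current [A], surroundings temperature [K]

m = rho_m * L * w * t                  # mass of the sheet
R = rho_e * L / (w * t)                # resistance along the current path
A_rad = 2 * L * w                      # top and bottom faces radiate

def dTdt(time, T):
    joule = I**2 * R                                     # resistive heating [W]
    radiated = eps * sigma * A_rad * (T**4 - T_inf**4)   # net radiative loss [W]
    return (joule - radiated) / (m * c)                  # energy balance

sol = solve_ivp(dTdt, (0.0, 300.0), [T_inf])
print(f"temperature after 5 minutes: {sol.y[0, -1]:.0f} K")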

We then modeled a rectangular metal sheet of equivalent dimensions in Abaqus with the same material values, boundary conditions, and current. Comparing the temperature histories (Figure 2B), the Abaqus model yields almost exactly the same results as the Python script.

The next case that we verified was the one-dimensional resistive heating of a rectangular sheet of metal (Figure 3A). The sheet has a fixed temperature at its ends, is thermally insulated on its sides, and radiates thermal energy from its top and bottom. The current travels through the ends of the sheet, while the sides, top, and bottom are electrically insulated. In this case, we must consider conduction along the direction of the current, and thus need to discretize the sheet into rectangular volumes such that temperature is now a function of both time and space.

We then use Fourier's Law of Conduction and assume that all of the properties involved are temperature independent. We integrate the partial differential equation for temperature using the method of lines with the lsoda time integrator, which switches to the backward-differentiation (BDF) solver for stability, and we find the steady-state solution analytically.
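
A method-of-lines sketch of this one-dimensional problem, again with illustrative parameters rather than the script's actual values, might look like the following; scipy's LSODA integrator handles the stiffness of the semi-discretized equations:

import numpy as np
from scipy.integrate import solve_ivp

N, L = 101, 0.10                      # grid points, sheet length [m]
w, t = 0.02, 0.5e-3                   # sheet width and thickness [m]
k, rho_m, c = 16.0, 8000.0, 500.0     # conductivity, density, specific heat
rho_e, I = 7.2e-7, 100.0              # resistivity [ohm-m], current [A]
eps, sigma, T_inf = 0.35, 5.670e-8, 300.0

x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
q_joule = I**2 * rho_e / (w * t)**2   # volumetric heating (I/A)^2 * rho_e [W/m^3]

def rhs(time, T):
    dT = np.zeros_like(T)             # ends fixed at T_inf, so dT/dt = 0 there
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2         # second derivative
    rad = 2.0 * eps * sigma * (T[1:-1]**4 - T_inf**4) / t  # both faces, per volume
    dT[1:-1] = (k * lap + q_joule - rad) / (rho_m * c)
    return dT

sol = solve_ivp(rhs, (0.0, 600.0), np.full(N, T_inf), method="LSODA")
print(f"peak temperature: {sol.y[:, -1].max():.0f} K")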

The final case we checked was the one-dimensional (radial) resistive heating of a metal disk (Figure 4A). In this case, the analogous Abaqus model was created as an axisymmetric solid, so when working with the model, we only specified conditions on the rectangular cross section outlined in yellow in Figure 4A. The outer radius of the disk was fixed at 300 K, while the top and bottom faces of the disk radiate thermal energy. The current travels radially through the disk, while the top and bottom of the disk are electrically insulated. The Python script uses methods similar to those in the one-dimensional rectangular sheet case.

In each case, we then plotted the solutions from Abaqus and the Python scripts and showed that they are reasonably close, verifying that Abaqus provides accurate results for coupled thermal-electric problems (Figures 2B, 3B, 4B).

Abaqus/CAE Design Study

The first model we analyzed was of a single stainless steel piece with the current passed into the bottom face of the thick stem (Figure 5). We chose to create these models using the axisymmetric mode, as the disk and flange are symmetric around the central axis. There are two problems with this design. Firstly, most of the heat is dissipated in the stem, and secondly, the hottest part of the piece is at the base of the stem. In contrast, we want the hottest part of the piece to be at the center of the disk to create the hotspot.

Figure 5: First Axisymmetric Abaqus model for the temperature distribution of the disk, in Kelvin. NT11 represents the nodal reference temperature value.

We then came up with a design in which a copper rod would grip the thinner stem of the stainless steel piece, in order to combat heat loss up the stem (Figure 6A). This, however, introduced the problem of the heat conducting away from the stainless steel disk where the copper was in contact with the disk. This caused a temperature profile with a peak temperature that was at some radius away from the center, rather than creating a central hotspot (Figure 6B).

To combat this issue, we replaced the top part of the copper piece with stainless steel and shortened this piece so that it was not in contact with the stainless steel disk, greatly reducing conductive losses without increasing heat dissipation in the stem (Figure 7A). We proceeded with this technique but switched from rods to bars in the computer-aided design (CAD) phase in order to achieve better clamping on the small stainless steel stem and reduce contact resistance. The critical dimension in this assembly is the cross-sectional area of the bars/rods, which determines the resistance of the pieces. Because we kept the cross-sectional area consistent in the switch from rods to bars, we were able to continue using the axisymmetric mode in Abaqus to model the assembly.

Testing Set Up

The disk and bars that hold the stem are made of 303 stainless steel, and the other bars are made of 101 Copper. The flange is made of 6061 Aluminum, and finally the cable is 4/0 battery wire. Once the parts were assembled, thermocouples were welded to the center of and along the radius of the disk in order to take temperature readings of these spots over time.

Figure 8: All parts designed and machined for this project.

Results

Design choices for the disk thickness and the method of passing current through the disk were determined from models created in Abaqus/CAE, engineering software capable of thermal analysis of mechanical components and assemblies. The final disk thickness was chosen to be 0.508 mm, and the disk and assembly are shown below.

The parts were assembled and connected to the power supply, and currents between 80 A and 120 A were passed through the assembly. To measure the temperature, we spot-welded a thermocouple to the center of the disk and recorded the temperature over time. Temperatures at a few other points on the disk were measured using the same method to get a sense of the temperature profile of the disk (Figure 14). These points were at the center, a 5.5 mm radius, and a 23.5 mm radius at 110 A (Figure 13).

Discussion

After comparing Abaqus models of the assembly with nominal dimensions to the results of the experiment, we see that the experiments yield much higher temperatures than the models at the same currents.

Current Input | Experimental Max. Temperature at Center | Predicted Max. Temperature at Center | Additional Current Required to Match in Abaqus
80 A          | 777 K                                   | 588 K                                | 28 A
120 A         | 1247 K                                  | 871 K                                | 48 A
Table 1: Table of the results from the highest and lowest currents tested. The experimental temperatures were found to be higher than those predicted by the Abaqus models.

We include the results of the highest and lowest currents tested in the table above. For example, with 80 A, the lowest current that we tested, we recorded a maximum temperature of 777 K after running current for about three minutes, which is 189 K higher than what the Abaqus simulations predicted (Figure 14A). In order to match the experimental temperature that 80 A is producing, we must pass an additional 28 A in the Abaqus model. 

One source of discrepancy in these results was the inconsistency in the disk thickness, as measured by an imperial micrometer (0.0001–0.001 inches). Because the disk was extremely thin and the disk and stem had to be machined from a single piece of material, the resulting disk was of uneven thickness, at some points significantly thinner than intended, ranging from 0.244 mm to 0.518 mm.

Figure 15: Close view of the stem on the bottom of the disk and the crack at the stem’s circumference.

Another likely source for this discrepancy was the very thin crack that was found around the region of the stem at the top of the disk (Figure 15). Both of these inconsistencies with the nominal design likely contributed to the resulting higher temperatures as they caused a decrease in the cross sectional area of the current path, thus increasing resistive heating.

However, in order to prove that ignition due to a disk of about these dimensions in this setup was possible, we cut a piece of 0.305 mm thick stainless steel shim and spot welded it to a steel rod of the same dimensions as the stem in the initial design (Figure 16).

Figure 16: Alternate test article, made of 0.305 mm thick stainless steel shim and a steel rod of the same dimensions as the stem in the initial design.

We were able to achieve ignition of a hydrogen-air mixture (29.6% H2, 70.4% air) at 130 A. We modeled this case using the same methods used for the original disk, and Abaqus predicted a hotspot temperature of around 861 K. Although this is considerably lower than the expected ignition temperatures, the result is consistent with the data above, which show that Abaqus predicts temperatures a few hundred Kelvin lower than those measured. In this case, the higher experimental temperature could be due to radiation from the stem being reabsorbed at the hotspot, resistance from the spot weld that joined the stem to the sheet, or deformation of the sheet increasing the thermal resistance of the test article.

Conclusions

In this work, we designed an electrically heated hot spot and measured the temperature over time at the hot spot and a few other points at various currents. With Abaqus, we were able to design a test article that produced ignition temperatures. However, because of the repeated heating and cooling of the test article, the disk deformed and eventually cracked, preventing us from performing an ignition experiment with the original disk. In future work, it will be important to consider the effects of deformation on the disk and the repeatability of the experiments. It may be beneficial to design an easier-to-manufacture test article so that data can be collected with test articles that have been used only a few times, minimizing the effect of deformation on the resulting temperatures.

Acknowledgments

This research project was made possible by the Caltech SURF Program, Explosion Dynamic Laboratory, and its funding agency, the Boeing Company. I would like to thank my mentor, Dr. Joe Shepherd, for allowing me the opportunity to work in the Explosion Dynamics Laboratory this summer. I would also like to thank my co-mentor Donner Schoeffler for providing me with the knowledge and guidance to be successful in my project. Finally, I would like to thank both Bob and Toni Perpall and the SFP Office for funding and facilitating my SURF this summer, and giving me the opportunity to conduct research as an undergraduate student.

References

  1. J. Melguizo-Gavilanes, L. R. Boeck, R. Mével, and J. E. Shepherd, “Hot surface ignition of stoichiometric hydrogen-air mixtures,” International Journal of Hydrogen Energy, vol. 42, no. 11, pp. 7393–7403, Mar. 2017, doi: 10.1016/j.ijhydene.2016.05.095.
  2. John H. Lienhard IV and John H. Lienhard V, A Heat Transfer Textbook, 5th ed. Cambridge, Massachusetts: Phlogiston Press, 2020.
  3. C. S. Kim, “Thermophysical Properties of Stainless Steels,” Argonne National Laboratory, ANL-75-55, 1975.
  4. C. Y. Ho and T. K. Chu, “Electrical Resistivity and Thermal Conductivity of Nine Selected AISI Stainless Steels,” American Iron and Steel Institute, CINDAS Report 45, 1977.
  5. J. Adler, “Ignition of a combustible stagnant gas layer by a circular hot-spot,” Combustion Theory and Modelling, vol. 3, no. 2, pp. 359–369, Jun. 1999, doi: 10.1088/1364-7830/3/2/309.
  6. S. Jones and J. Shepherd, “Thermal ignition of n-hexane air mixtures by vertical cylinders,” International Symposium on Hazards, Prevention, and Mitigation of Industrial Hazards, 2020.
  7. “What is a type K Thermocouple?,” https://www.omega.com/en-us/. https://www.omega.com/en-us/resources/k-type-thermocouples (accessed Sep. 16, 2021).

Further Reading

Mars 2020 Sampling and Caching Trending

The Mars 2020 Perseverance Rover aims to explore the surface of Mars to analyze its habitability, seek biosignatures of past life, and obtain and cache rock and regolith samples (the outer rocky material of the bedrock). This article describes tools designed to aid with the processing of data to and from the Rover. The processed data provides a plethora of scientific insight into the Rover’s Sampling and Caching Subsystem (SCS) health and performance. Additionally, these tools allow for the identification of important trends and help to ensure that the commands sent to the Rover have been reviewed, approved, and accounted for. Overall, these tools aid the Mars 2020 mission to seek biosignatures of past life by helping engineers better understand the Rover’s operations as it caches rock and regolith samples on Mars.

Author: Annabel R. Gomez
California Institute of Technology
Mentors: Kyle Kaplan, Julie Townsend
Jet Propulsion Laboratory, California Institute of Technology
Editor: Audrey DeVault

Abstract

This paper details a few of the many Python-based Sampling and Caching (SNC) tools developed to aid in the processing of the uplink and downlink data to and from the Mars 2020 Perseverance Rover. To specify, uplink refers to the data sent to the Rover, and downlink refers to the data sent back from the Rover. The data processed provides a plethora of scientific insight used to ensure the health and performance of the Rover’s Sampling and Caching Subsystem (SCS). One of the problems with dealing with such data, however, is that it can be difficult to immediately identify important trends. Thus, I worked to help develop three separate tool tickets. The first ticket identifies and records each unique Rover motor motion event and its respective characteristics. The second ticket makes the process of storing Engineering Health and Accountability logs more efficient. Finally, the third ticket, unlike the first two, sorts through the data that is sent to the Rover instead of the data that is sent back from it. More specifically, it helps to ensure that all the commands sent to the Rover have been properly reviewed, approved, and accounted for.

Introduction

The Mars 2020 mission aims to explore the surface of Mars using the Perseverance Rover. In July of 2020, Perseverance was sent to Jezero Crater to analyze its habitability, seek biosignatures of past life, and obtain and cache rock and regolith samples, the outer rocky material of the bedrock. The Rover landed on Mars in February of 2021.

One of the main systems that will carry out these critical tasks is the SCS designed to collect and cache rock core and regolith samples and prepare abraded rock surfaces. To abrade a rock surface simply means to scrape away or remove rock material. The overall SCS is composed of two Robotic Systems that work together to perform the sampling and caching functions. One of the systems is a Robotic Arm and Turret on the outside of the Rover and the second system is an Adaptive Caching Assembly on the inside of Perseverance [1, 2].

Figure 1: The labeled components of the Mars 2020 Rover’s SCS [1].

The SNC team is responsible for performing tactical downlink assessments to monitor the SCS’s health and performance. In addition, they oversee the initiation of strategic analyses and long-term trending to assess the subsystem’s execution across multiple sols, or days on Mars (1 sol is about 24 hours and 39 minutes).

Figure 2: The architecture of the Sampling and Caching System [2].

On any given sol, there is a plethora of data generated by the SCS. This data is collated into informative reports that are used to complete each sol’s tactical downlink analysis. SNC engineers then assess these reports on downlink and report on the status of the subsystem. New tools have been developed to further analyze and dissect this data: they identify motion events, process data into summary products stored on the cloud, and help ensure that only approved and tested sequences and commands are sent to the Rover.

Objectives

During downlink from the Rover, it is often difficult to identify unique data trends immediately. Thus, one of the main goals of this research is to establish a sampling-focused trending dashboard infrastructure containing plots of motor data from specific activities over the course of the Mars 2020 mission. This facilitates the identification of trends that can be compared to the on-hand data collected from testbed operations back on Earth, and supports the creation of new plots and tables to gain a better understanding of the Rover and its behavior. Identifying such trends will pinpoint potential issues and will allow for changes and/or corrections to the SCS’s operations where necessary.

SNC accomplishes these long-term objectives by creating a trending web dashboard filled with plots and other metrics that can be easily used by others working on the project. To establish a baseline for how the SCS should behave during any given activity, multiple instances of the same activity must be collected and compared over time. Prioritized motions and metrics, identified through coordination with Systems Engineers from each of the SCS domains, are stored over the course of the mission and populated on the dashboard.

One challenge, however, is stale data, or data that is not being processed or updated in a timely manner. In a large mission, such as Mars 2020, it is important that data is received and organized quickly so that other teams can complete their respective tasks. Additionally, it is important to avoid long processing times for users so that data for spacecraft planning can be assessed in a timely manner [2]. This means that data needs to be preprocessed, put in the desired format, and stored so that it can be accessed locally by the dashboard as needed. Once preprocessed, data needs to be updated whenever new data is downlinked. To combat this challenge, I worked to find a solution that queries and stores the data on a regular basis.

The principal tasks of my research include architecting and developing Python-based tools, or algorithms, for the SNC downlink process. This process begins with the receipt of data from the vehicle, continues through the tactical downlink analysis, where it is decided whether SNC is GO/NO-GO for the sol’s uplink planning, and ends with updates to any strategic products for analyses. More specifically, three focal tools are needed to make the trending succeed:

  1. A tool to identify new data once available.
  2. A tool to preprocess and store new data for trending.
  3. A tool to update trending dashboard plots with the latest preprocessed data.

With these tools in place, displayed trends in the Rover’s operations will help indicate how the SNC operations should proceed for the sol. This paper covers three specific tickets related to this overall development effort.

Ticket 1: Motion Events

Many of the tickets, or projects, that I worked on this summer focus on the second goal mentioned above: developing a tool to process and store new data for trending. This first ticket contributes to a program that further analyzes and dissects Rover motion events, each representing a unique motion request by a single motor, to better identify their characteristics. Previously, motion events were recorded during a specified epoch; however, the method for determining that epoch led to inconsistent results. The additions I made to this motion detection program now allow it to detect the start and end time of each motor movement within an epoch and properly record it as a separate motion, as described in Figure 3. It is important to collect and filter each starting motion by identifying all the mode Event Verification Records (EVRs), a print-statement style of telemetry, that correspond to the start of the motion. Once the start and end times of each motion event are identified with this tool, subsequent tools can extract a great deal of mechanism data from the specified time period and generate the needed statistics and plots. The main learning curve with this ticket was getting used to the Jupyter Notebook and GitHub environment, as well as learning how to program collaboratively to build off of and/or debug someone else’s code.
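
The core of the pairing logic can be sketched as follows; the real Mars 2020 telemetry interfaces and EVR message names are internal to the mission, so the structures below are hypothetical stand-ins:

from dataclasses import dataclass

@dataclass
class Evr:
    time: float      # spacecraft event time [s]; field names are hypothetical
    motor: str       # e.g., "coring_drill"
    message: str     # e.g., "MOTION_START" or "MOTION_END"

def find_motion_events(evrs):
    """Pair each motor's MOTION_START with its next MOTION_END.

    Returns (motor, start_time, end_time) tuples, one per motion, so that
    downstream tools can pull mechanism data for exactly that window.
    """
    open_starts, events = {}, []
    for evr in sorted(evrs, key=lambda e: e.time):
        if evr.message == "MOTION_START":
            open_starts[evr.motor] = evr.time
        elif evr.message == "MOTION_END" and evr.motor in open_starts:
            events.append((evr.motor, open_starts.pop(evr.motor), evr.time))
    return events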

Figure 3: Ticket 1- Motion Events flowchart.

Ticket 2: EHA Logs

The second ticket I worked on adds to and edits a previously existing program designed to capture Engineering Health and Accountability (EHA) logs with each downlink pass and store them on the Operational Cloud Storage (OCS) sol by sol. To specify, EHA logs are essentially large CSV files, with each column representing a timestamp or a different EHA channel’s value over a given time period. Essentially, EHA is channelized telemetry collected aboard the Rover and processed on the ground; each channel represents a different type of data and is recorded at a specified rate. Additionally, the OCS is a hosted cloud storage location where operations data is stored and maintained.

The storage of the EHA logs is useful for JMP-based analyses of flight data and serves as a stepping stone to generating summary CSV files from flight data. JMP is a GUI-based program for data analysis and visualization, most often used here to analyze sampling data. The main challenge with this ticket was that there was too much data to collect and load at once. As a result, the program kept crashing because each query request exceeded Python’s memory limit. To combat this, I developed a function that takes the desired time range of data and breaks it up into shorter segments, so that smaller sections of data are written to the CSV file in the correct order, as described in Figure 4. In general, preprocessing the data in this form helps visualize the system state and also lets other tools analyze flight data in a compatible format.
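
A sketch of this chunking approach is below; the `query` argument is a hypothetical stand-in for the real telemetry query interface, and the chunk size is illustrative:

import pandas as pd

def write_eha_csv(query, t_start, t_end, path, chunk_hours=1.0):
    """Write a long EHA time range to CSV in memory-safe chunks.

    `query(t0, t1)` returns a DataFrame of channel values for the window
    [t0, t1). Chunks are appended in time order, so the final CSV matches
    what a single oversized query would have produced.
    """
    edges = pd.date_range(t_start, t_end, freq=pd.Timedelta(hours=chunk_hours))
    if edges[-1] < pd.Timestamp(t_end):
        edges = edges.append(pd.DatetimeIndex([pd.Timestamp(t_end)]))
    for i, (t0, t1) in enumerate(zip(edges[:-1], edges[1:])):
        chunk = query(t0, t1)
        # write the header only on the first chunk, then append
        chunk.to_csv(path, mode="w" if i == 0 else "a", header=(i == 0), index=False)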

Figure 4: Ticket 2- EHA Logs flowchart.

Ticket 3: Sequence Tracking Tools

The first two tickets are on the downlink side of SNC operations, processing and analyzing data sent back from the Rover. To gain further experience with the many sampling and caching tools the SNC team develops, my next ticket aids uplink operations: processing data before it is sent to the Rover. When it comes to sending commands to the Rover, many sequences are written, and all of their versions must be tracked and maintained. There are two places where sequences are stored: the uplink store and the sequence library. The sequence library is where requests are stored, such as robotic commands for sampling. Once the sequences have been tested and approved, they are sent to the uplink store, from which they are delivered to the Rover. Specifically, in this ticket, I helped make progress towards ensuring that all of the commands sent to the Rover have been properly reviewed, approved, and accounted for. As seen in Figure 5, there are several steps to complete for this task. To start, I developed a program that obtains a list of the sequences merged into the library along with their version information and specifications. Then, I developed another program that pulls a sequence list down from the uplink store to get a list of what has been sent there. In the upcoming steps (3 and 4), a cross-reference will be created and used as a check to compare the two lists, comparing sequences when the versions are the same. This ensures the sequences sent to the Rover match those version-controlled in the sequence library.
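
A minimal sketch of the planned cross-reference (steps 3 and 4), assuming each listing tool reduces its output to a simple name-to-version mapping (the sequence names below are hypothetical):

def cross_reference(library, uplink_store):
    """Compare sequence versions in the library against the uplink store.

    Both arguments are {sequence_name: version} mappings produced by the
    two listing tools described above. Anything in the uplink store that
    is missing from the reviewed library, or present at a different
    version, is flagged for follow-up.
    """
    unapproved = {name: ver for name, ver in uplink_store.items()
                  if name not in library}
    mismatched = {name: (library[name], ver)
                  for name, ver in uplink_store.items()
                  if name in library and library[name] != ver}
    return unapproved, mismatched

# Example: seq_b was uplinked at a version never merged into the library,
# and seq_c never went through the library at all.
lib = {"seq_a": "v3", "seq_b": "v1"}
store = {"seq_a": "v3", "seq_b": "v2", "seq_c": "v1"}
print(cross_reference(lib, store))  # ({'seq_c': 'v1'}, {'seq_b': ('v1', 'v2')})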

Figure 5: Ticket 3- Sequence Tracking Tools flowchart and step description.

Conclusion

This research helped to establish an efficient sampling-focused trending dashboard infrastructure and improved the ability to identify trends in Rover data from specific activities over the course of the Mars 2020 mission. I developed and improved upon three main Python-based tickets involved in the uplink and downlink of Rover data. The first ticket identifies and records each unique Rover motor motion event and its respective characteristics, such as the start and end times. The second ticket makes the process of storing EHA logs more time efficient to better visualize the system state. Finally, the third ticket, unlike the first two, works with the data that is sent to the Rover instead of the data sent back from it. More specifically, it works to ensure that all of the commands sent to the Rover have been properly reviewed, approved, and accounted for. Overall, the tickets I worked on will continue to aid the Mars 2020 mission of seeking biosignatures of past life by helping SNC engineers better understand the Rover’s operations as it caches rock and regolith samples on Mars.

References

  1. Anderson, R. C., et al. “The Sampling and Caching Subsystem (SCS) for the Scientific Exploration of Jezero Crater by the Mars 2020 Perseverance Rover.” Space Science Reviews, Springer Netherlands, link.springer.com/article/10.1007/s11214-020-00783-7.
  2. Allwood, A. C., Walter, M. R., et al. “Mars 2020 Mission Overview.” Space Science Reviews, Springer Netherlands, link.springer.com/article/10.1007/s11214-020-00762-y.
  3. “New Tools to Automatically Generate Derived Products upon Downlink Passes for Mars Science Laboratory Operations.” IEEE Xplore, ieeexplore.ieee.org/document/9172647.

Acknowledgments

I would like to thank my incredible mentors Kyle Kaplan and Julie Townsend as well as Sawyer Brook for providing research guidance, support, and technical assistance; Michael Lashore for taking the time to allow me to shadow his SNC downlink shift; Frank Hartman for finding and funding this opportunity; and Caltech Northern California Associates and Mary and Sam Vodopia for the fellowship to participate in Caltech’s Summer Undergraduate Research Fellowship (SURF).  

Further Reading

The Sampling and Caching Subsystem (SCS) for Scientific Exploration of Jezero Crater by the Mars 2020 Perseverance Rover

Mars 2020 Mission Overview

The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, and was sponsored by the SURF internship program and the National Aeronautics and Space Administration (80NM0018D0004).

© 2021. California Institute of Technology. Government sponsorship acknowledged.

Robotic Arm Control Through Algorithmic Neural Decoding and Augmented Reality Object Detection

The ability to control a robotic arm using only human thought is made possible by a brain-machine interface (BMI). This project sought to improve BMI user practicality by revising the Andersen Lab’s BMI robotic arm control system [1, 2]. It aimed to answer two questions: Does augmented reality benefit BMIs? Can BMIs be controlled by decoded verbs from the posterior parietal cortex brain region? This project found that augmented reality significantly improved the functionality of BMIs and that using motor imagery was the most effective way to control BMI motion.

Author: Sydney Hunt
Duke University
Mentors: Richard Andersen, Jorge Gamez de Leon
California Institute of Technology
Editor: Alex Bardon

Introduction

BMIs function by connecting brain activity to a machine through the use of sensors that are surgically implanted in an individual’s brain. When these sensors are connected to a computer, the electrical activity of the brain is measured. This neuronal information is then used to control an external device through a ‘neural decoding task,’ or a machine learning algorithm that programs the external device to perform a specific function when the computer reads a specific electrical activity from the brain.

Neural Sensors

Early versions of intracortical BMIs in humans focused on using signals recorded from arrays implanted in the primary motor cortex (M1) or posterior parietal cortex (PPC) of tetraplegic humans [3]. Data from these studies showed that both the M1 and PPC can encode the body part a person plans to move and a person’s nonmotor intentions [4]. Yet sometimes, it is difficult to predict whether the PPC or M1 will provide stronger signals, as these strengths vary depending on an individual’s brain and the location of the neurally implanted sensors.

In this project, JJ (a Photoshop artist, steak lover, and father of four who became tetraplegic in a flying go-kart accident) had two 4×4 mm array sensors [5] surgically implanted in his brain. The first sensor was implanted in his M1 to detect the neural information that governs his voluntary movement control and execution [3]. The second sensor was implanted in his PPC to detect his planned movement, spatial reasoning, and attention [4, 6].

JJ’s electrical brain activity was measured by utilizing these two sensors in combination with the NeuroPort Biopotential Signal Processing System [7]. This collected neuronal information was later used to control a Kinova JACO robotic arm [8].

Neural Decoding Task for JACO Robotic Arm Control

A limited amount of research has found that thinking about action verbs (e.g., “grasp”) can be decoded from M1 or PPC, in addition to the desired bodily movement [4]. This decoded information could potentially reduce the effort needed to control a BMI system; JJ could simply think of the word “grasp” rather than imagining the grasping action (which includes reaching, grabbing, and picking up) when controlling the JACO robotic arm [9].

A neural decoding task was therefore developed to translate JJ’s measured electrical brain activity to an action the Kinova JACO robotic arm would perform (e.g. “grasp”). This neural decoding task trained a machine learning algorithm to correctly associate JJ’s electrical brain activity to robotic arm movement (see Figures 1-2). As a result, JJ was able to control the Kinova JACO robotic arm’s trajectory with only his thoughts.
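
As described in Figure 2, the classification was performed with a Linear Discriminant Analysis (LDA). For illustration only, a cross-validated LDA decoder of this kind can be set up as below; the spike-count features are synthetic stand-ins for the real recordings, and the trial and channel counts are assumptions:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the dataset: one row of spike counts per trial
# (one column per recorded unit), one class label per trial.
rng = np.random.default_rng(0)
n_trials, n_units = 120, 96
X = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)
labels = np.repeat(["V.Grasp", "V.Drink", "V.Drop",
                    "I.Grasp", "I.Drink", "I.Drop"], n_trials // 6)

# Cross-validated decoding accuracy, analogous to the percentages in Figure 2.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.1%}")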

Figure 1: Training the Neural Decoder. The developed neural decoding task consisted of one of five action verbs (“grasp”, “drink”, “drop”, “move”, “rotate”) randomly appearing on a monitor screen, followed by the randomly selected appearance of a red dot or blue square. When a red dot appeared on the monitor, JJ verbally commanded the word that previously appeared (e.g. “grasp”). When a blue square appeared, JJ nonverbally commanded or imagined doing the action of the word that previously appeared. JJ’s electrical brain activity was recorded during this exercise using the NeuroPort Biopotential Signal Processing System, which stored which of JJ’s neuronal signals were active throughout this task. This collected data was later spike sorted and analyzed to measure the accuracy of the machine learning algorithm (see Figure 2).
Figure 2: Data Analysis of Neural Decoder Accuracy (Motor Imagery). A Linear Discriminant Analysis Classification was performed on the neural decoding task for the verbally commanded and imagined verbs (“grasp”, “drink”, “drop”).  “V.Grasp” corresponds to “grasp” being verbally commanded (JJ said the word “grasp” aloud), while “I.Grasp” corresponds to the “grasp” action being imagined (JJ imagined himself grasping an object). This labeling system applies to all action verbs in the figure.

The color of each square in the figure corresponds to the percentage value of the accuracy of the machine learning algorithm between the Predicted Class and True Class. As seen by the blue to pink gradient scale on the right side of each graph, these values can range from 0% accurate (light blue) to 100% accurate (hot pink).

In each graph, the diagonal composed of the six squares from the top left corner to the bottom right corner corresponds to the machine learning algorithm being correct (what the algorithm predicted JJ commanded/imagined matched what JJ commanded/imagined). The left graph represents the results of the neuronal signals collected from JJ’s sensor in his M1 and the right graph represents the results of the neuronal signals collected from JJ’s sensor in his PPC. The algorithm’s overall accuracy is displayed above each graph (36.67% in M1, 55.00% in PPC).
Object Selection via Augmented Reality

The Andersen Lab’s BMI interface was further modernized by introducing augmented reality technology into its system. This incorporation allowed JJ to independently select any object in a 360-degree space using the object detection features of the Microsoft HoloLens2 [10] augmented reality device camera (see Figure 3). BMI practicality was consequently increased; the spatial limitations of object selection when using a BMI were reduced since predefined objects or predefined locations were no longer required in this BMI system.

Figure 3: HoloLens2 View of Unity Application. A screenshot of what JJ saw through the Microsoft HoloLens2 screen when using the revised Andersen Lab BMI. A Unity Cross-Platform Game Engine application was developed and run on the HoloLens2 augmented reality device. It caused a cube game object, similar to what is shown in this photo, to appear on the HoloLens2 screen when the HoloLens2’s camera detected a water bottle. When the eye-tracking feature of the HoloLens2 recognized that JJ was looking at the cube, the cube would rotate, signaling JJ’s object selection. The Kinova JACO robotic arm can also be seen on the left side of the figure. It, along with other objects in the real-world room, is covered in white mesh due to HoloLens2’s spatial mapping feature.

Results

This BMI system allowed JJ to independently select, pick up, move, and set down a water bottle without verbalization (see Figures 4-5). Consequently, this cognitive-based neural prosthesis could reduce the social anxiety that paralyzed individuals may experience when using voice-controlled prostheses.

The functioning BMI system also demonstrated that augmented reality benefitted the Andersen Lab’s BMI system by reducing its spatial limitations. It allowed JJ to select objects of his choice to be manipulated by an external device, rather than be constrained to using predefined objects and predefined start/end locations.

Data analysis of the neural decoding task showed that motor imagery of action verbs (e.g. JJ imagining himself grasping something) was better represented than both commanding action verbs (e.g. JJ said the word “grasp” aloud) and imagining action verbs (e.g. JJ imagined saying the word “grasp”) in the area of the M1 and PPC where JJ’s particular arrays were implanted (see Figure 2). When comparing the decoding accuracy between the two areas of the brain, the PPC had a higher decoding accuracy of motor imagery than the M1.

Figure 4: BMI Trajectory. A high-level overview of how the BMI system functioned. “Camera” in this figure refers to the camera on the HoloLens2 augmented reality headset. See Figure 5 for more details on how the Kinova JACO Robotic Arm trajectory was determined.
Figure 5: Kinova JACO Arm Trajectory. A high-level overview of the three states the JACO robotic arm went through when executing the BMI system. The states are numbered on the rightmost side of the figure (1,2, and 3). The action the Kinova JACO Robotic Arm executed was determined based on the thumb motion JJ performed: “1.” in each stage corresponds to the action the Kinova JACO Robotic Arm performed if JJ moved his thumb in the upward direction; “2.”  in each stage corresponds to the action the Kinova JACO Robotic Arm performed if JJ moved his thumb in the downward direction.

Future Applications

The Andersen Lab had previously developed a neural decoding task that accurately decoded JJ’s neuron signals when he imagined moving his thumb in the up, down, forward, or backward direction. Due to time constraints, this thumb control was implemented into this project in order to let JJ control the Kinova JACO robotic arm (see Figures 4-5). 

Future work can include implementing the neural decoding task developed in this project into the Andersen Lab’s BMI. Using this strong PPC motor imagery representation of action verbs may make it easier for JJ to navigate the Kinova JACO robotic arm. Rather than meticulously imagining thumb movement, JJ can just think about performing the grasping action and the Kinova JACO robotic arm will execute a hardcoded movement associated with the “grasp” action verb [9].

Additional improvements can also include the implementation of multiple unique game objects appearing on the HoloLens2 screen, providing JJ with visual feedback that multiple real-world objects were being detected. Therefore, JJ could theoretically create a queue of objects to move, which would better represent real-life scenarios. 

Acknowledgments

Thank you to the following individuals and groups for their support. I greatly appreciate your mentorship, guidance, and confidence in me both during and after this project.

Andersen Lab; California Institute of Technology; Caltech WAVE Fellows Program; Friends; Family; JJ; Jorge Gamez de Leon; Richard Andersen; Tianqiao and Chrissy Chen Institute for Neuroscience; Tyson Aflalo.

References

  1. Andersen, R. (2019). The Intention Machine. Scientific American, 320(4), 24–31. https://doi.org/10.1038/scientificamerican0419-24
  2. Katyal, K. D., Johannes, M. S., Kellis, S., Aflalo, T., Klaes, C., McGee, T. G., Para, M. P., Shi, Y., Lee, B., Pejsa, K., Liu, C., Wester, B. A., Tenore, F., Beaty, J. D., Ravitz, A. D., Andersen, R. A., & McLoughlin, M. P. (2014). A collaborative BCI approach to autonomous control of a prosthetic limb system. 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 1479–1482. https://doi.org/10.1109/SMC.2014.6974124
  3. Hochberg, L. R., Serruya, M. D., Friehs, G. M., Mukand, J. A., Saleh, M., Caplan, A. H., Branner, A., Chen, D., Penn, R. D., & Donoghue, J. P. (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442(7099), 164–171. https://doi.org/10.1038/nature04970
  4. Andersen, R. A., Aflalo, T., & Kellis, S. (2019). From thought to action: The brain-machine interface in posterior parietal cortex. Proceedings of the National Academy of Sciences, 116(52), 26274–26279. https://doi.org/10.1073/pnas.1902276116
  5. “NeuroPort Array IFU.” NeuroPort Array PN 4382, 4383, 6248, and 6249 Instructions for Use, 29 June 2018, https://blackrockneurotech.com/research/wp-content/ifu/LB-0612_NeuroPort_Array_IFU.pdf. 
  6. Aflalo, T., Kellis, S., Klaes, C., Lee, B., Shi, Y., Pejsa, K., Shanfield, K., Hayes-Jackson, S., Aisen, M., Heck, C., Liu, C., & Andersen, R. (2015). Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science, 348(6237), 906–910. https://doi.org/10.1126/science.aaa5417
  7. Blackrock Microsystems, LLC. NeuroPort Biopotential Signal Processing System User’s Manual, 2018. Accessed on: May 31, 2021. [Online]. Available: https://blackrockneurotech.com/research/wp-content/ifu/LB-0175_NeuroPort_Biopotential_Signal_Processing_System_Users_Manual.pdf 
  8. KINOVA JACO™ Prosthetic robotic arm User Guide, 2018. Accessed on: May 31, 2021. [Online]. Available: https://github.com/Kinovarobotics, https://www.kinovarobotics.com/sites/default/files/PS-PRA-JAC-UG-INT-EN%20201804-1.0%20%28KINOVA%20JACO%E2%84%A2%20Prosthetic%20robotic%20arm%20user%20guide%29_0.pdf 
  9. T. Aflalo, C. Y. Zhang, E. R. Rosario, N. Pouratian, G. A. Orban, & R. A. Andersen (2020). A Shared Neural Substrate for Action Verbs and Observed Actions in Human Posterior Parietal Cortex. Science Advances. https://www.vis.caltech.edu/documents/17984/Science_Advances_2020.pdf
  10. Microsoft HoloLens: https://docs.microsoft.com/es-es/windows/mixed-reality/

Further Reading

Please click the links below or visit the Andersen Lab website for more information about BMIs.

The Intention Machine: A new generation of brain-machine  interfaces can deduce what a person wants

Annual Review of Psychology: Exploring Cognition with Brain–Machine Interfaces

Single-trial decoding of movement intentions using functional ultrasound neuroimaging

Collapse of Fuzzy Dark Matter in Simulations

Dark matter builds the backbone for galaxy formation in modern cosmological simulations. While the current dark matter paradigm is able to capture the large-scale structure of the Universe, it predicts more small-scale structure than is actually observed. This issue may be resolved with the theory of fuzzy dark matter, which is made of ultra-light, wavelike particles. However, the behavior of fuzzy dark matter has yet to be fully understood in present theoretical models. Using 3D numerical simulations, we performed a comprehensive analysis of spherical collapse in fuzzy dark matter halos. Then, we made use of a semi-analytic treatment to predict how likely it is for the halo to collapse, and thus explore small-scale structure formation in the Universe.

Author: Shalini Kurinchi-Vendhan
California Institute of Technology
Mentors: Xiaolong Du, Andrew J. Benson
Carnegie Theoretical Astrophysics Center
Editor: Suchitra Dara

Why study fuzzy dark matter?

Since its discovery by Vera Rubin [1], the nature of dark matter has eluded physicists and astronomers, posing some of the greatest questions in cosmology: What is the dark matter particle made of? What is its mass? And how does it interact with other matter to form the large-scale structure of the Universe?

Modern cosmological simulations work to capture the behavior of dark matter through numerical models. These N-body simulations are extremely powerful tools: they can broadly simulate any dynamical system of particles under the influence of physical forces. In these simulations, dark matter builds the backbone for the formation of galaxies, which are expected to form at the centers of dark matter clumps called halos [2]. Due to the force of gravity, it is thought that these dark matter halos would (1) grow as they pulled surrounding gas into their cores until (2) they collapsed, and (3) stabilized into the first galaxies. Scientists are interested in studying this process of halo collapse.

The most prevalent model in cosmological simulations is the cold dark matter cosmology, where dark matter is made of cold, slow-moving particles. The cold dark matter model has been successful in explaining observations of the large-scale structure formation of the Universe. Despite its overall success, however, the cold dark matter model fails in two small-scale problems. It predicts the existence of more dwarf galaxies than are observed in the real Universe, giving rise to the “missing satellites” problem [3, 4, 5]. Moreover, it leads to discrepancies in the density profiles of these galaxies, predicting cores that are cuspy, or steep, rather than flat as observed. This is known as the “core-cusp” problem [6]. Though these problems are indeed small-scale, they are indicative of some missing component in the cold dark matter model and represent a gap in our current understanding of dark matter.

These tensions lead physicists to hypothesize the existence of fuzzy dark matter. This type of dark matter is composed of ultra-light particles called ‘axions’ [5, 7]. Unlike cold dark matter particles, these particles behave in a quantum fashion according to Schrödinger’s equation, giving them wavelike properties that manifest on galactic scales. Like the cold dark matter model, the fuzzy dark matter model reproduces the large-scale structure of the Universe, as desired. Most interestingly, it suppresses small-scale structure by leading to less halo formation, thus alleviating the “missing satellites” problem [8, 9]. Moreover, it predicts flat cores in the centers of dark matter halos, as indicated by observations. The fuzzy dark matter model thus demonstrates several promising characteristics for understanding dark matter as it behaves in the real Universe.

The above is adapted from Nori & Baldi [10]. It shows a comparison of large-scale structures that resulted from a cosmological simulation, using standard cold dark matter initial conditions (left) and fuzzy dark matter initial conditions, with quantum pressure (right). The high intensity points represent overdensities, where halos are expected to collapse and galaxies eventually form. Although both models produce very similar large-scale structures, as seen in these images, fuzzy dark matter behaves very differently on smaller scales. Due to quantum pressure, the growth of overdensities is suppressed on small scales. This leads to less cosmic structure in the fuzzy case, where there are fewer bright points of overdensity than in the cold case.

Nevertheless, modeling fuzzy dark matter on subgalactic scales is challenging because of the need for sufficiently high resolution in cosmological simulations [5, 8]. Standard N-body methods, which suffice in the cold dark matter case, are not adequate here; instead, solving the Schrödinger equation yields a complex wave function that oscillates rapidly in time and interferes in space, requiring high resolution in both time and space. Even so, several works have been able to solve the Schrödinger equation and thus test the spherical collapse of fuzzy dark matter halos in their cosmological codes (see for example [11]).

One notable area of active research, however, is in modeling the quantum effect which distinguishes fuzzy dark matter from cold dark matter. Arising from the uncertainty principle, quantum pressure in fuzzy dark matter can lead to a minimum size for a collapsing halo. While new studies are being done to understand how quantum pressure affects the collapse of dark matter halos [12], they are often constrained to simple one-dimensional problems in hydrodynamical simulations. This approach does not work well in the case of shell-crossing, when the inner shells of a halo come close to each other and experience repulsion. Solving the three-dimensional Schrödinger-Poisson equation can thus provide more insight into the behavior of fuzzy dark matter halos during their formation.
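
For reference, the coupled system in question is the Schrödinger-Poisson system, written here in its basic non-expanding form (cosmological codes add comoving coordinates and expansion terms):

i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi + m\Phi\psi, \qquad \nabla^2 \Phi = 4\pi G\left(\rho - \bar{\rho}\right)

where \psi is the dark matter wave function, m is the axion mass, \Phi is the gravitational potential, \rho = |\psi|^2 is the density, and \bar{\rho} is its mean.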

Purpose of this work

In this way, a more comprehensive analysis of how quantum pressure will affect the collapse of dark matter halos, and thus structure formation in the Universe, is needed. In this work, we use a three-dimensional treatment of fuzzy dark matter to study the effect of quantum pressure in the formation of dark matter halos. Not only would we like to determine how dark matter halos collapse in this model, but also when:

  • How do the dark matter particles evolve? We will look at the evolution of the velocities and densities of dark matter particles over time.
  • When does the dark matter halo collapse? We would like to determine when the fuzzy dark matter halo forms given the initial amplitude and size in the early Universe.
  • How does the halo stabilize? Since the fuzzy dark matter model involves an additional quantum pressure term, this can potentially alter the way in which the halo reaches a state at which it neither expands nor collapses.

We can then predict the halo abundance in the fuzzy dark matter model, particularly on the low-mass end, in a halo mass function (see Press–Schechter theory in [13]). Ultimately, we would like to compare our results with those of the cold dark matter paradigm in order to assess the potential of fuzzy dark matter as an alternative theory for dark matter. This can allow us to have a greater understanding of how dark matter halos may have developed, and thus shaped the eventual formation of galaxies.
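
Concretely, in Press–Schechter theory the comoving number density of halos per unit mass takes the form

\frac{dn}{dM} = \sqrt{\frac{2}{\pi}}\,\frac{\bar{\rho}}{M^2}\,\frac{\delta_c}{\sigma(M)}\left|\frac{d\ln\sigma}{d\ln M}\right|\exp\left(-\frac{\delta_c^2}{2\sigma^2(M)}\right)

where \bar{\rho} is the mean matter density, \sigma(M) is the rms amplitude of density fluctuations smoothed on mass scale M, and \delta_c is the critical overdensity required for collapse. Measuring how \delta_c changes in the fuzzy model (see Figure 3) is what lets us adapt this machinery to predict fuzzy dark matter halo abundances.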

Figure 1: Radial evolution of simulated halos over time. Left: a schematic for interpreting the basic “story” of the halo from the evolution curves. Right: shown for both the cold dark matter (gray) and fuzzy dark matter (blue) cases, using different sets of initial conditions; i.e. low vs. high mass (light vs. darker shades) and low vs. high initial amplitudes (thin vs. thick). We additionally indicate the time of collapse (vertical dotted). Due to suppression from quantum pressure, the fuzzy dark matter halos collapse at later times and stabilize at larger radii.
Evolution of both a simulated cold (top) and fuzzy (bottom) dark matter halo over time. The initial overdensity grows before it starts to collapse and form a stable halo. This process is delayed in the fuzzy dark matter case, where quantum pressure suppresses halo formation on small scales.

How to simulate fuzzy dark matter

We ran 3-D numerical simulations to model the evolution of a dark matter halo over time, in both the cold and fuzzy cases. Unlike previous studies, we focused on a single halo so that we could render it with a large number of particles in a small volume to get high resolutions in our results.

Setting initial conditions

The very first step is to set-up appropriate initial conditions for the halo that we are simulating. This means setting its shape and growth at a very early time in the Universe.

Consider a smooth field of particles at the mean density of the Universe. We can perturb the density field at the position of the halo so that it has a higher central concentration of dark matter, called the overdensity. The overdensity of a dark matter halo is how dense it is compared to the mean density of the Universe. The initial amplitude of this overdensity is what sets its growth and causes the halo to collapse. Meanwhile, the size of the halo is determined by the initial mass of the particles inside the region of the overdensity.
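
In symbols, if \rho(\mathbf{x}) is the local density and \bar{\rho} is the mean matter density of the Universe, the overdensity is

\delta(\mathbf{x}) = \frac{\rho(\mathbf{x}) - \bar{\rho}}{\bar{\rho}}

so that \delta = 0 everywhere in a perfectly smooth field and \delta > 0 at the position of the halo.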

Figure 2: Overdensity of the simulated halos with respect to the distance from the halo centers, at present-day. The profiles are shown for each set of initial conditions, like in Figure 1. While the cold dark matter profiles keep increasing toward the center of the halos, the fuzzy dark matter profiles flatten at lower central overdensities. In this way, the fuzzy halos collapse less.

While we can extract the positions of the particles from their density distribution, we need to use the following continuity equation [14] to get the radial velocities of the particles:

\frac{\partial \delta}{\partial t} = - \nabla \cdot \vec{v}

where \delta is the overdensity and \vec{v} is the velocity field of the particles.


Thus, we can calculate the initial position and velocity vectors of the N-body particles in the simulation. In the current iteration of our work, we present four sets of simulations for the cold and fuzzy cases each, specifically for halos with two different masses, and with both low and high initial overdensity amplitudes. The table of initial parameter values is shown below in Table 1.
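As a rough illustration of this step (a toy sketch, not the actual initial-conditions code used in this work), the snippet below perturbs a uniform grid of particles with a spherical Gaussian overdensity and assigns radial infall velocities from the linearized continuity equation; the profile shape, amplitude, and growth factor are all illustrative assumptions:

  import numpy as np

  def make_initial_conditions(n_side=16, box=1.0, amplitude=0.04, r_s=0.1, f_growth=1.0):
      # Uniform grid of particle positions (toy stand-in for a pre-initial grid)
      coords = (np.arange(n_side) + 0.5) * box / n_side
      x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
      pos = np.stack([x, y, z], axis=-1).reshape(-1, 3)

      # Radial distance of each particle from the box center
      rvec = pos - box / 2.0
      r = np.linalg.norm(rvec, axis=1) + 1e-12

      # Illustrative Gaussian overdensity profile delta(r)
      delta = amplitude * np.exp(-((r / r_s) ** 2))

      # Mean overdensity enclosed within each particle's radius
      order = np.argsort(r)
      mean_enclosed = np.empty_like(delta)
      mean_enclosed[order] = np.cumsum(delta[order]) / np.arange(1, r.size + 1)

      # Linearized continuity equation for a growing spherical mode:
      # v_r = -(f_growth / 3) * <delta(<r)> * r, i.e. infall where delta > 0
      v_r = -(f_growth / 3.0) * mean_enclosed * r
      vel = v_r[:, None] * (rvec / r[:, None])
      return pos, vel

  pos, vel = make_initial_conditions()
  print(pos.shape, vel.shape)  # (4096, 3) (4096, 3)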

Halo mass [M⊙]    Initial amplitude
1 × 10⁹           0.04
1 × 10⁹           0.08
2.5 × 10⁹         0.01
2.5 × 10⁹         0.02
Table 1: Set of initial conditions for running the simulations of cold and fuzzy dark matter halos.

These conditions result in a total of eight simulations to compare.

Figure 3: Ratio of critical overdensities in the fuzzy model to the cold dark matter case, as a function of the halo masses in each simulation set (black circles). Notice that we include an additional “normalization point” (blue dot) at the very high-mass end, where we estimate the ratio to be one. This is because, at very large scales, we expect cold dark matter and fuzzy dark matter to behave very similarly. By fitting a function (blue dashed line) to these critical overdensities, we can predict how likely it is for a fuzzy dark matter halo to collapse.

Running the simulations

We evolve each halo using the cold dark matter paradigm as a basis for comparison. Here, we use the simulation suite GADGET [15], a gravitational N-body code that is widely used in cosmological simulations based on the cold dark matter paradigm.

Then, we use the same set of initial conditions to run a spectral code for fuzzy dark matter, developed by Du et al. [16]. It numerically solves the three-dimensional Schrödinger-Poisson equation for the wavefunction that describes the evolution of fuzzy dark matter.

We run each simulation from redshift z = 100 (roughly 17 million years after the Big Bang) to the present-day.
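As a quick sanity check on that conversion, one can compute the cosmic age at z = 100 with a standard cosmology package; the snippet below assumes astropy's built-in Planck 2018 parameters, which may differ slightly from the fiducial cosmology of the simulations:

  from astropy.cosmology import Planck18

  # Age of the Universe at the simulation start (z = 100) and at the end (z = 0)
  print(Planck18.age(100).to("Myr"))  # ~17 Myr
  print(Planck18.age(0).to("Gyr"))    # ~13.8 Gyr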

Results of the simulations

Now, we can look at how the different simulated halos evolve in the fuzzy model, versus in the cold dark matter case.

Figure 4: Halo mass function from the cold (gray) and fuzzy (blue) dark matter simulations. This function predicts the number density of halos at different masses. Both models agree at the high-mass, large-scale end. However, the fuzzy model has less small-scale structure than the cold dark matter model.

Basic “story” of the halo

One way to observe the collapsing process is by tracing the radius of the halo, since this shows how the overdensity grows or shrinks over time. This is shown in Figure 1. The typical “story” of the cold dark matter halos can be summarized in three steps (a simple way to read these milestones off a radius-versus-time curve is sketched after the list):

  1. The halo expands along with the Universe.
  2. Due to gravity, the halo ‘turns around’ at a maximum radius and begins to shrink and collapse.
  3. Eventually, the halo stabilizes at an equilibrium radius.
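Here is a hedged sketch of how steps 2 and 3 might be identified numerically: turnaround is the maximum of r(t), and collapse is flagged when the radius first drops below an assumed fraction of the turnaround radius. The 0.5 default loosely echoes the standard result that a virialized halo settles at about half its turnaround radius, but here it is just a tunable threshold, not the criterion used in this work:

  import numpy as np

  def turnaround_and_collapse(t, r, collapse_fraction=0.5):
      # Turnaround: where the halo reaches its maximum radius
      i_ta = int(np.argmax(r))
      # Collapse proxy: first time r falls below a fraction of the turnaround radius
      below = np.nonzero(r[i_ta:] < collapse_fraction * r[i_ta])[0]
      t_col = t[i_ta + below[0]] if below.size else None  # None: not collapsed yet
      return t[i_ta], t_col

  # Toy radius history: expansion, turnaround, then collapse
  t = np.linspace(0.0, 1.0, 201)
  r = np.sin(np.pi * t)
  print(turnaround_and_collapse(t, r))  # turnaround near t = 0.5, collapse near t = 0.83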

While the fuzzy dark matter halos roughly follow this path, they do so with several differences. Not only do they collapse at later times, but they also stabilize at larger radii than their cold dark matter counterparts. In some cases, the halo even expands again. For the high-mass, low-initial-amplitude run, the fuzzy dark matter halo does not even finish collapsing before the present day.

In this way, the collapsing process is delayed and suppressed in the fuzzy model, compared to the cold dark matter case.

Fuzzy dark matter halos “collapse less”

We can also see the suppression of collapse in the density profiles of the fuzzy dark matter halos.

The final overdensities of the different simulated halos are shown in Figure 2, with respect to the distance from the halo centers. While the cold dark matter profiles continue to increase toward the center of the halo, the densities of the fuzzy dark matter halos flatten out. Once again, we see that halos cannot collapse to the same extent in the fuzzy dark matter case as in the cold model.

This suppression is likely due to the additional influence of quantum pressure in fuzzy dark matter, which imposes a minimum requirement for halo collapse to occur. But what is that requirement?

What does it take to collapse?

In other words, we would like to know the critical overdensity needed for a fuzzy dark matter halo to collapse.

As demonstrated in Figure 1, we can identify the time of collapse for each of the simulated halos by tracing their radial evolutions. Then, we can calculate the overdensity of each halo at that point in time. Figure 3 compares the critical overdensities from the fuzzy and cold models as a ratio, for each simulation set at different halo masses. At low masses, the fuzzy halos need to attain a much higher density than the cold dark matter halos in order to collapse; meanwhile, the threshold for collapse is similar between both models at the high-mass end.
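To sketch how the fitted curve in Figure 3 might be obtained (the functional form and data points below are illustrative assumptions, not the values measured in this work), one could fit the simulated ratios together with the high-mass normalization point using scipy:

  import numpy as np
  from scipy.optimize import curve_fit

  def ratio_model(mass, m_half, slope):
      # Illustrative form: approaches 1 at high mass, rises toward low mass
      return 1.0 + (m_half / mass) ** slope

  # Hypothetical (halo mass [Msun], critical-overdensity ratio) points,
  # including a high-mass normalization point pinned near a ratio of 1
  mass = np.array([1e9, 2.5e9, 1e13])
  ratio = np.array([2.0, 1.4, 1.0])

  params, _ = curve_fit(ratio_model, mass, ratio, p0=[1e9, 1.0])
  print("m_half = %.2e Msun, slope = %.2f" % tuple(params))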

These results are in line with the prediction that suppression from quantum pressure is greater at smaller scales, while fuzzy dark matter and cold dark matter behave similarly at large scales. But what do these critical overdensities really tell us about small-scale structure formation in the Universe?

Small-scale structure in the fuzzy Universe

To be able to predict the numbers of satellite galaxies in the fuzzy model, compared to the cold dark matter case, we need to determine a halo mass function. This function describes the number density of halos we can expect at different masses.

Using the relation between the critical overdensity of collapse and halo mass from Figure 3, we can statistically determine the halo mass function. This is accomplished using the semi-analytic model called Galacticus [17]. We show the resulting halo mass functions for the cold and fuzzy dark matter models in Figure 4. Again, both models agree at the high-mass, large-scale end. Meanwhile, whereas the cold dark matter model predicts an abundance of low-mass halos, the fuzzy model has less small-scale structure.
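For context, the classic Press–Schechter form of the halo mass function (see [13]) shows where the critical overdensity enters; Galacticus implements more elaborate variants, so the expression below is only indicative. Here \bar{\rho} is the mean matter density and \sigma(M) is the rms fluctuation of the density field smoothed on mass scale M:

\frac{dn}{dM} = \sqrt{\frac{2}{\pi}} \, \frac{\bar{\rho}}{M^2} \, \frac{\delta_c}{\sigma(M)} \, \left| \frac{d \ln \sigma}{d \ln M} \right| \, \exp\left( - \frac{\delta_c^2}{2 \sigma^2(M)} \right)

A mass-dependent critical overdensity \delta_c(M), like the ratio measured in Figure 3, feeds directly into the exponential and suppresses the abundance of low-mass halos.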

What did we learn?

In this work, we ran 3-D simulations of spherical collapse in the fuzzy and cold dark matter models. With our results, we were able to determine the halo mass functions, and thus demonstrate that the fuzzy model produces less small-scale structure. We find that fuzzy dark matter is able to suppress halo collapse by:

  1. delaying collapse,
  2. forming less compact cores, with larger stabilizing radii and less concentrated central densities,
  3. and requiring a higher critical overdensity for collapse to occur at all.

In this way, the fuzzy dark matter model can indeed address the small-scale problems of the prevalent cold dark matter paradigm. In terms of the “missing satellites” crisis, the halo mass function shows that the fuzzy model leads to fewer low-mass galaxies. Moreover, its flatter overdensity profiles can potentially resolve the “core-cusp” problem.

But there is still a lot to explore!

While the overdensity profiles and halo mass function are promising in terms of solving the problems of cold dark matter, it would be insightful to compare our simulated results with observations of the real Universe. This would allow us to test how well the fuzzy dark matter model can capture galaxy formation, compared to the current paradigm.

Moreover, although we altered the initial density and mass of the simulated halos in this work, it is also possible to test different masses of the fuzzy dark matter particle. Comparing the results of these simulations with observations might help us constrain the mass of dark matter!

To investigate further, we can also monitor the quantum pressure energy that influences the collapse of the fuzzy dark matter halos. In addition, running more simulations at different halo masses would improve the precision of our halo mass function.

Beyond the scope of this work, it would also be exciting to explore what happens to the fuzzy dark matter halos after they collapse. In other words, how does this alternative model affect the actual galaxies that form?


Looking forward, understanding the spherical collapse of different dark matter models is an important step toward figuring out galaxy formation in the Universe.


Acknowledgements

S.K. thanks Xiaolong Du and Andrew Benson for advising and supporting this work. She is also grateful to Gwen Rudie and the Carnegie Astrophysics Summer Student Internship Program (CASSI) for providing this research opportunity. Finally, the author thanks the Caltech SURF program for funding her project.

References

  1. Rubin V. C., 1983, Scientific American, 248, 96
  2. Vogelsberger M., Marinacci F., Torrey P., Puchwein E., 2019, Cosmological Simulations of Galaxy Formation (arXiv:1909.07976)
  3. Klypin A., Kravtsov A. V., Valenzuela O., Prada F., 1999, The Astrophysical Journal, 522, 82–92
  4. Moore B., Ghigna S., Governato F., Lake G., Quinn T., Stadel J., Tozzi P., 1999, The Astrophysical Journal, 524, L19–L22
  5. Hu W., Barkana R., Gruzinov A., 2000, Physical Review Letters, 85, 1158–1161
  6. de Blok W. J. G., 2010, Advances in Astronomy, 2010, 789293
  7. Marsh D. J. E., 2016, Physics Reports, 643, 1–79
  8. Schive H.-Y., Chiueh T., Broadhurst T., 2014, Nature Physics, 10, 496–499
  9. Marsh D. J. E., Silk J., 2013, Monthly Notices of the Royal Astronomical Society, 437, 2652–2663
  10. Nori M., Baldi M., 2018, Monthly Notices of the Royal Astronomical Society, 478, 3935–3951
  11. Schwabe B., Gosenca M., Behrens C., Niemeyer J. C., Easther R., 2020, Physical Review D, 102
  12. Sreenath V., 2019, Physical Review D, 99
  13. Bond J. R., Cole S., Efstathiou G., Kaiser N., 1991, The Astrophysical Journal, 379, 440
  14. Binney J., Tremaine S., 2008, Galactic Dynamics: Second Edition
  15. Springel V., Pakmor R., Zier O., Reinecke M., 2021, Monthly Notices of the Royal Astronomical Society, 506, 2871–2949
  16. Du X., Schwabe B., Niemeyer J. C., Bürger D., 2018, Physical Review D, 97
  17. Benson A. J., 2012, New Astronomy, 17, 175–197

Further Reading

Are Telescopes the Only Way to Find Dark Matter?

Is Dark Matter Warm, Cold, or ‘Fuzzy’? New Simulations Provide Intriguing Insights.

Axion Cosmology (Review)

Interview with Professor Joseph Falson

Interviewer: Audrey DeVault

Biography

Dr. Joseph Falson joined Caltech’s faculty in 2019 as a visiting associate with the Department of Applied Physics and Materials Science, and became an Assistant Professor of Materials Science in 2020. His research focuses on the synthesis and characterization of quantum materials that display emergent functionalities. Dr. Falson received his B.S. from the University of New South Wales (2009), M.S. from Tohoku University (2012), and Ph.D. from the University of Tokyo (2015).

Trivia

  • Favorite food: Very good sushi; I did my PhD in Japan and was exposed to very good sushi by my seniors. I go very seldom, but I appreciate it.
  • Favorite country: Japan. It is one of the most beautiful countries on earth. It’s got everything, it’s really quite an amazing place.
  • Favorite musical genre: I enjoy all types of music, but I have recently been enjoying LA’s KJazz 88.1 radio station. It’s quite famous, and you can stream it from around the world; it’s fantastic!
  • Favorite theory/effect: The Fractional Quantum Hall Effect. It is a very beautiful, complex, layered, and emergent property of condensed matter systems. It deals with the concept of a system as a whole being more than the sum of its parts. The effect only emerges in many-body systems and is very difficult to predict a priori. It is nice to find properties of materials that are difficult to predict theoretically (and be ahead of the theorists). It’s very curiosity driven.
  • A fun fact most people don’t know: Most people don’t know that I’m Australian. I haven’t lived in Australia in about 10 years now, so it’s very rare that people pick up on my accent. Of course, Australians would hear me and know, but otherwise it’s difficult for people to pinpoint.

What type of research does your lab focus on?

Probing the fundamental properties of nature. Specifically, we look at crystalline materials. We grow crystals in “thin film” form, on the scale of 10 or 100 nm thick. When crystals are made very thin, their properties begin to change due to dimensionality effects. In quantum mechanics you may have seen the example of a quantum well, where the energy levels in some spatial direction become quantized. So now you have a combination of very subtle material properties that are built into the crystal structure, and you can perturb it by making the crystals into the shape that you want.

The reason why we try to make really clean crystals is because if you have crystals that are highly defective, and electrons inside them, the electrons will begin to feel the effects of the lattice much more than the effects of the other electrons. So in some situations, the lattice becomes like a secondary property, and we can begin to see the subtleties of how the electrons talk to each other while dressed in some property of the lattice. Usually these properties are very delicate, and only appear in the cleanest materials, often at very low temperatures (about 1 K).

What motivates your research?

We are motivated by the search for new paradigms. Scientists are often asked about the real-world applications of their research, but for us, it isn’t about creating a new material for use in phones or cameras. We are focused on the exploration of fundamental properties of nature.

That’s not to say our research won’t have amazing applications in the future; everything around us started out as fundamental research. Semiconductor science used to be considered uncontrolled and unimportant in comparison to other more well-understood, controllable fields like chemistry and biology. But now I am looking at a billion semiconductors on my screen. Lasers too; when the laser was invented, it wasn’t clear what its applications would be. But now our internet service is based on fiber optic cables across the seas. So just because an application is opaque at the outset doesn’t mean that it’s going to remain that way going forward.

There aren’t that many places where you can pursue pure, fundamental science like this, but Caltech is one of the few places that you can.

How did you get into Materials Science?

100% by chance; I have had a very long and winding road to get here. I was dedicated to doing medicine when I entered university but found very quickly that it wasn’t for me. I graduated with an undergraduate degree in chemistry, but at that point I was already sure that I didn’t want to do traditional wet chemistry. But I didn’t have a great idea of what I wanted to do; I just knew I was interested in electronics and crystal growth.

In my 3rd and 4th years of undergrad, I explored outside of my chemistry coursework, took some physics and materials science courses, and found that I enjoyed semiconductor physics. When I began applying to grad school, I applied for a Japanese government scholarship that I had been introduced to in a humanities course, and won it. So I cold emailed some professors in Japan. I met with a few chemistry professors, and one directed me towards another potential mentor he thought I would click with, who studied crystal growth and materials physics.

It was like falling in love in a scientific sense.

I had a very good connection with the mentor he directed me to. So I applied, got the scholarship, got into the school, and then went there for 5 years and got my PhD. I had an absolutely fantastic time. A lot of stars aligned. From there I made another international connection that led to a postdoc position in Germany for four years, and then applied to Caltech, and got the job. I was just driven by my gut feeling.

What I try to tell my students is that in life, the key to success is finding something you are able to do for 10 hours a day and not grow to hate it. If you want to be top-class successful at anything in life, it doesn’t matter how smart you are if you don’t put the time in. Especially in experimental science, so many things go wrong, and you encounter such difficult procedures, so you have to just be tough. I never get up and go to work and want to go home, and I hope that the people around me find that as well.

What have been some of the challenges and rewards of building a new lab? (How has the pandemic affected things?)

Everything has been challenging because it’s been very difficult to get extra manpower, and everything gets delayed. A lot of the equipment is now set up, but I had to do that 100% with my one student. For the vacuum equipment, usually a team would have flown in for two weeks to set it up, so it was quite a monumental task, and it’s taken us about 3 months of work. The good thing that’s come out of that is that now I know that machine like the back of my hand. That’s my baby now, I know all the intricate details. So as hard as it’s been, I’m pretty happy.

What do you think is the most exciting new development in materials science?

I would say the most exciting thing isn’t necessarily a development, but more of a tide change. Early on, materials science was very much semiconductor-based, because semiconductors were novel. Then the field moved on to studying a select class of very high quality semiconductors very intensely. But now we know nearly everything about them, so recently 2-D materials have been a big thing, and we are entering a stage where there is a strong appetite for a large variety of materials. The field is maturing to be more interdisciplinary, and the tide is shifting towards a search for more exotic, highly crystalline materials. The momentum has grown a lot in recent years, and there are a lot of new computational tools that help predict new materials instead of just doing blind experiments. Theorists are using machine learning and large data sets to make these predictions, and it enriches my half of the problem.

What does your typical day look like? What do you do in your spare time?

On a typical day I go to lab at around 8 am, I talk with my grad student and we discuss our goals for the day. Then we try to focus on setting up equipment. I teach twice a week, and spend a lot of time planning and thinking about my lectures. My days are sometimes pretty long and difficult, and physically challenging as well, but it’s nice to see something set up at the end of the day.

In my free time I like to garden. I like it because it takes me 100% away from thinking about anything academic or sophisticated. I also like the idea that it’s like multitasking. When I’m at work, the plants are also hard at work growing.

My wife and I recently bought a house, and there are a lot of things we need to fix up, so at the moment I spend all my weekends sanding or painting. But it’s fun to be free to build stuff how you want it.

What is it like to travel the world as a scientist?

Attending international conferences and talking to people has resulted in a lot of collaboration, friendship, and support. Science is a very natural way of bridging boundaries.

Obviously, it is very different from a vacation as you are traveling to work, but I do think that it’s incumbent on scientists to enjoy their time while traveling. I think it’s one of the major perks of being a scientist.

I try to travel internationally as much as possible, though I will admit I am enjoying this one year where I don’t have to travel. In the year before I came to Caltech, I would fly from Europe to Caltech for a meeting, and then fly home the next day. There were some hard days and lots of 15-hour flights, so I am enjoying not traveling this year.

What do you love most about Caltech?

I like that science is the number one priority. Caltech places a lot of trust in its students and staff, including young people like me, and I enjoy the independence. The weather and the campus are also very nice; it’s a beautiful place. There’s something special about rolling up to Caltech and going to the Athenaeum; it’s a very unique experience, very dignified.

I am very passionate about undergraduate mentoring, and I like that undergrads are strongly encouraged to participate in research.

What advice would you give to your younger self/Caltech undergrads?

Reflecting on my undergrad time, I didn’t have a designated advisor. I was very naive and was very lucky that everything worked out. I wish that I had forged a relationship with a mentor when I was younger and had someone to guide me. Luckily, I did have a mentor in graduate school, and that’s when my career really started to take shape. I would recommend you take full advantage of having an advisor. Especially going into your 20s, you think you know what’s going on, but looking back now, I had no idea. You should hunt for a good mentor: someone who can advise you in research and courses, and who is also in tune with your personal needs. A good mentor is a good human.

I would suggest you get involved with a lab as heavily as you can, and that you should think about lab work not as something tangential to your studies, but as your life education. It’s what they’re not going to teach you in class. I think you should take the most difficult classes possible in the first three years, research whenever you can, and then do a senior thesis. I personally didn’t do a senior thesis, and I 100% regret it. I have plans for every undergraduate in my lab to do a senior thesis. I strongly encourage it, because that’s what is going to be the most useful when applying to grad school.

Above all, though, you’ve got to have fun and challenge yourself.