
PML 2021 SURF Projects

Contact Title Description

Amit Agrawal and Henri Lezec

Modulating the Optical Response of Multi-Resonant Nanostructures using Phase Change Materials

Phase change materials (PCMs) have a rich history in computer information storage, particularly alloys of the constituent elements Ge, Sb, and Te. The alloy Ge2Sb2Te5 (GST) can be switched between its amorphous and crystalline states on a nanosecond timescale by electrical or optical pulse melt-quenching. These two phases also differ greatly in refractive index and resistance, making GST a promising candidate for non-volatile memory and rewritable optical storage. By actively modulating PCM properties in this way, one can modulate the spectral resonances of subwavelength photonic devices and achieve high optical contrast as the material is switched between phases. A device that bridges optical and electrical memory storage may be achievable by combining a PCM with multi-resonant nanophotonic structures. Two device embodiments were studied using the finite-difference time-domain (FDTD) technique to model their electromagnetic response. The first is a metal-insulator-metal nanopillar array that utilizes surface plasmon polaritons (SPPs) to achieve a multi-resonant optical response. The second is a Huygens' metasurface that leverages bound states in the continuum to achieve a high-quality-factor resonant response. Switching the phase of the GST disrupts the underlying mechanisms governing the spectral resonances, enabling high-contrast modulation of the optical responses of both devices. Full electromagnetic simulations were performed for each phase of GST in the two device geometries, and the transmission and reflection were collected and analyzed. The results show complete disruption of SPP generation between the GST and the metal in the nanopillar device in one of the phases. For the Huygens' metasurface, preliminary simulations suggest that GST switching will significantly improve the spectral modulation depth of the high-quality-factor peaks.

Sujitra Pookpanratana

Custom Calibration and Correction of Photoemission Electron Microscope Images

In 2019, a photoemission electron microscope (PEEM) was delivered and installed on the NIST Gaithersburg campus. The PEEM is a full-field electron microscope that utilizes the photoelectric effect to image a surface. It is a useful tool because it has resolution on the scale of 10 nanometers and can image both the morphology of a surface and its electronic properties. These two imaging techniques can be applied to further understand the electronic traits of materials for use in electronic devices, for example. While the hardware of the PEEM was built commercially, the calibration and processing procedures had to be developed in-house. The real-space images produced by the PEEM can, in most cases, be manually corrected with small datasets. These corrections include bright-field, dark-field, and thermal-drift corrections. For momentum-space images, rotations, pixel calibrations, and energy-alignment calibrations must be completed. These operations are difficult to complete manually with existing software. I have developed custom Python scripts to both automate this process and standardize the calibration and correction procedure, streamlining data analysis for users of the PEEM. Graphene was used as an initial calibration material due to its distinct electronic band structure. The six Dirac cones of graphene were used as iso-energy points to align the frames along the energy axis, and a series of matrix operations was used to rotate the image in the momentum axes to correct for sample misalignment.
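The momentum-space rotation correction amounts to applying a 2D rotation matrix to the (kx, ky) coordinates of each frame. The sketch below illustrates the idea in Python with NumPy; the angle and arrays are illustrative, not the actual PEEM calibration values.

```python
import numpy as np

def rotate_k_space(kx, ky, angle_deg):
    """Rotate momentum-space coordinates by angle_deg to correct sample misalignment."""
    theta = np.deg2rad(angle_deg)
    # Standard 2D rotation matrix applied to each (kx, ky) pair
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    k = np.vstack([np.ravel(kx), np.ravel(ky)])
    kxr, kyr = R @ k
    return kxr.reshape(np.shape(kx)), kyr.reshape(np.shape(ky))

# Rotating the point (1, 0) by 90 degrees moves it to (0, 1)
kx, ky = rotate_k_space(np.array([1.0]), np.array([0.0]), 90.0)
```

The same transform, applied frame by frame, aligns all momentum-space images to a common crystallographic orientation.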

Maritoni Litorja

Assembly of Remote-Controlled Miniature LED Assembly

Currently, most detectors used in biology research to measure light in the femtowatt to nanowatt range are not SI traceable. Instead, these detectors report proprietary units called Relative Light Units. It is therefore difficult to properly calibrate these detectors, since the actual light emission is unknown. Additionally, the lack of calibrated light sources prevents scientists from measuring, in SI units, the light emission of certain chemical processes, like the production of oxyluciferin, the familiar firefly glow, which is used as a chemical tag for a variety of biological measurements. To calibrate these detectors in SI units, I had to design and build an LED light source that emits a steady amount of light at certain wavelengths. Since an LED's light emission is governed by its drive current, I have to limit the current to the nanoamp range using a gigaohm resistor. Another important design requirement is remote control, since physically switching the device on and off could cause slight shifts in its location, changing the amount of light received by the detector. A cost-effective way to achieve this is the Arduino WiFi Rev2 board, since it is easily programmable and can receive commands over WiFi. The final consideration is using an extended light source instead of a discrete source, because an extended source makes it easier to precisely measure the light. Organic LEDs (OLEDs) are extended light sources because they are built by placing an organic semiconductor layer between two electrodes; these layers are larger than the semiconductor chips that typical LEDs use. OLEDs have recently become commercially available for display technologies, making them cost-effective as well. As such, my design for the light source consists of an Arduino controlling an OLED display with a nanoamp current running through it.
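The current-limiting design follows directly from Ohm's law. The snippet below sketches the arithmetic; the 3.3 V supply is an assumed microcontroller logic level, not a value from the project.

```python
# Illustrative current-limiting calculation: I = V / R.
# The supply voltage is an assumption (typical logic level), not a project spec.
V = 3.3        # volts
R = 1.0e9      # ohms, gigaohm series resistor
I = V / R      # amperes
I_nA = I * 1e9 # express in nanoamps
```

With a gigaohm resistor, a few volts of drive yields a few nanoamps through the OLED, placing the emission in the femto-to-nanowatt range of interest.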

Jon Geist

Assessing the causes of uncertainties within triaxial accelerometers

Accelerometers are used in a wide array of devices across many industries. Given their ubiquity, it is important to know what affects their outputs. The uncertainties in those outputs are important to study and understand, and their causes need to be properly identified. The simulations presented here model rotating the apparatus containing the accelerometers in the local gravitational field from multiple starting orientations. The response of each accelerometer is assumed to take a linear form: an intrinsic set of offsets added to the product of a sensitivity matrix and the acceleration experienced by the device. Sample sensitivity matrices and offset triples are then generated and input in order to create the full dataset. The data are then assumed to take the form of a sum of a sine and a cosine, with coefficients, plus a constant. The generated data are fitted to this form in order to find intrinsic properties involving the sensitivity matrix, the offsets, and the dot products between the axes of maximum sensitivity of the three accelerometers. The uncertainties in the intrinsic properties and sensitivity matrices were not found to be caused by non-orthogonality of the accelerometer axes. Rather, the significant uncertainties in the intrinsic properties of the triaxial accelerometer are due to imprecise installation and construction. This is important because it shows manufacturers and customers what should be focused on most. While there should still be efforts to keep the non-orthogonality as small as possible, most of the focus should be on the precise installation and construction of the devices.
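The fitting procedure described above can be sketched in a few lines: simulate one accelerometer axis rotating in the local gravitational field with an assumed sensitivity and offset, then recover both by least-squares fitting the form A·sin(θ) + B·cos(θ) + C. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
g = 9.80665  # m/s^2, standard gravity

# Rotate the device about a horizontal axis; one accelerometer axis then reads
# sensitivity * g * sin(theta) + offset, plus measurement noise.
theta = np.linspace(0.0, 2.0 * np.pi, 60)
sensitivity, offset = 1.02, 0.05  # illustrative intrinsic properties
readings = sensitivity * g * np.sin(theta) + offset \
    + rng.normal(0.0, 0.01, theta.size)

# Fit the assumed form A*sin(theta) + B*cos(theta) + C by linear least squares
M = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
(A, B, C), *_ = np.linalg.lstsq(M, readings, rcond=None)

est_sensitivity = np.hypot(A, B) / g  # recovered gain along this axis
est_offset = C                        # recovered intrinsic offset
```

The recovered coefficients reproduce the assumed sensitivity and offset to within the noise level, which is the basic mechanism the full three-axis analysis builds on.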

Andrew Madison

Automation of a hyperspectral confocal laser scanning microscope

Confocal laser scanning microscopy is a mainstay of modern metrology, enabling sensitive measurements of photoluminescent samples with both high specificity and high spatial resolution. Such measurements directly impact ongoing research efforts as diverse as the nanofabrication of quantum emitters and the chemical analysis of nanoplastics. In traditional confocal microscopes, optical filters coarsely divide continuous emission spectra into discrete bands that reduce the specificity of the measurement and are subject to spectral crosstalk in the presence of heterogeneous emitters. This problem of spectral ambiguity degrades the performance of the microscope, at best, and corrupts the reduction of image data, at worst. To solve this problem, we are integrating and automating a spectrograph into a custom confocal laser scanning microscope to spectrally resolve optical responses of disparate samples including gallium arsenide quantum dots, fluorescent polystyrene nanoparticles, and polymeric nanoplastic arrays. Automation of our new microscope is central to this stage of the project. In addition to developing hardware drivers that control laser scanning, widefield illumination, inspection imaging, and stage positioning, we are developing a graphical user interface that facilitates rapid system configuration and data collection. This presentation will report on the progress of the integration and automation of key subsystems of a hyperspectral confocal laser scanning microscope.

Marcelo Davanco

AlGaAs metalens for on-chip waveguide single-photon sources

Electromagnetic radiation from point dipole light sources is difficult to manipulate with conventional free-space optics. Metasurfaces are geometries with subwavelength features that can be engineered to allow flexible and effective control of the wavefront of light beams, allowing radical manipulation of the flow of light. Epitaxial quantum dots are nanoscale semiconductor heterostructures embedded in semiconductor wafers which act as dipole emitters, producing single photons. Metasurfaces allow for directed confinement of these photons, which may be funneled into an on-chip waveguide with high efficiency.

  • Using finite-difference time-domain simulation software, we have designed a planar metasurface to perform this task. Using a simple equation describing phase shift as a function of position, we have designed a metalens for in-plane phase manipulation to more effectively couple light into a photonic waveguide.
  • After simulation is complete, the metasurface will be fabricated on our own device. Our geometry is an alternative to existing methods for collecting photons from epitaxial quantum dots. Our design is non-resonant, which allows efficient collection of emission over a broad wavelength range without the need for spectrally tuning the quantum dot. This design also avoids placing the quantum dot in extreme proximity to etched sidewalls, which causes decoherence of the emitted photons.
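The phase-shift-versus-position equation for a focusing metasurface is commonly written as a hyperbolic profile. The sketch below evaluates that form; the wavelength and focal length are placeholders, not the device's actual parameters.

```python
import numpy as np

def metalens_phase(x, wavelength, focal_length):
    """Hyperbolic lens phase profile: focuses an incident plane wave to a
    point at distance focal_length (all lengths in the same units)."""
    return -(2.0 * np.pi / wavelength) * (
        np.sqrt(x**2 + focal_length**2) - focal_length)

# Illustrative values: 0.94 um emission, 5 um focal length (assumptions)
lam, f = 0.94, 5.0
x = np.linspace(-3.0, 3.0, 7)   # positions across the lens, micrometers
phi = metalens_phase(x, lam, f)
```

The profile is zero at the lens center and grows increasingly negative toward the edges, delaying the outer rays so all paths arrive at the focus in phase.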

Robert McMichael

Sequential Bayesian Experiment Design

In this talk we focus on sequential Bayesian experimental design, an adaptive method that makes measurements faster and more efficient. In sequential Bayesian experimental design, we adaptively choose new, optimal settings for experiments while accounting for measurement noise. From the measurement-noise properties, we can find the probability that each possible parameter value describes our system, with the most likely values selected as the "true" values for that iteration. As with curve fitting in Excel, we want to find the best model for our data, which means we need the best possible parameters (the coefficients, if the model were a polynomial). Our method then estimates the utility (cost) of different measurement settings and picks high-utility settings for the next measurement. We then make a new measurement with the improved settings, which yields a more informed model using the most probable parameters, which in turn yields even better settings. The decisions made about the settings in each step of this sequential process rely on predicted changes in information entropy, which is not always simple to compute. Currently we use a crude but mostly effective estimate based on the variance of our samples, so we are exploring new, computationally efficient ways to calculate the information entropy. We have some ideas as to why our crude method does not always work, whereas the true entropy always will, which is why finding efficient ways to calculate entropy is important. Even with the current crude estimate of entropy, however, sequential Bayesian experiment design works very well, focusing measurements on the most sensitive settings while avoiding uninformative ones. We find that the sequential Bayesian experimental design method converges to accurate parameter values faster than other experimental automation methods.
Overall, our software performs better than traditional experiment automation, which means we are that much closer to having computers that not only interpret data but also guide data collection.
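A toy version of the sequential loop can be written in a few lines, assuming a simple linear model y = a·x with Gaussian noise and the variance-based utility estimate described above. All settings and values here are illustrative, not from the actual software.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: measurement y = a * x + noise; we infer the slope `a` and
# adaptively pick the next setting x. Numbers are illustrative.
a_true, sigma = 1.7, 0.1
a_grid = np.linspace(0.0, 3.0, 601)
log_post = np.zeros_like(a_grid)        # flat prior over the grid
settings = np.array([0.1, 0.5, 1.0, 2.0])

def posterior():
    p = np.exp(log_post - log_post.max())
    return p / p.sum()

for _ in range(20):
    p = posterior()
    var_a = np.sum(p * a_grid**2) - np.sum(p * a_grid) ** 2
    # Crude variance-based utility: predicted output spread at each setting
    utility = settings**2 * var_a
    x = settings[np.argmax(utility)]               # pick high-utility setting
    y = a_true * x + rng.normal(0.0, sigma)        # "perform" the measurement
    log_post += -0.5 * ((y - a_grid * x) / sigma) ** 2   # Bayes update

p = posterior()
a_mean = np.sum(p * a_grid)
a_std = np.sqrt(np.sum(p * a_grid**2) - a_mean**2)
```

The loop keeps choosing the most informative setting (here, the largest |x|), so the posterior over the slope narrows quickly around the true value.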

Elizabeth Scott

Simulating Charged Particle Energy Deposition

Neutron beta decay is the process in which a neutron decays into a proton, an electron, and an antineutrino. The primary goal of the project is to use a cryogenic superconducting detector to measure the energy of the charged particles produced in neutron beta decay. A superconductor below its critical temperature has zero resistance but some finite impedance, which in combination with a resistor allows for the construction of a resonant circuit. As heat is deposited in the superconductor by incident charged particles, the impedance of the superconductor, and by extension that of the circuit, is altered. The project measures the change in the circuit's resonant frequency, which depends on the circuit's impedance, by passing an AC current through the circuit. The energy deposited can therefore be measured by analyzing the change in the circuit's resonant frequency. This summer I worked on using Geant4, a Monte Carlo toolkit for simulating particles in matter, to create an in-depth simulation of energy deposition and thermal transport in the detector. The simulation's goal is to model an incoming electron striking the detector and the heat propagation from the electron strike. This is possible using the G4CMP extension package for Geant4, which allows modeling of phonon physics within silicon. Phonons are quantized vibrations of a crystal lattice that carry its heat, or kinetic energy. By having the electrons hit a silicon crystal lattice, we could see how the heat disperses through the detector by tracking the phonons that were created, which allows finer tuning of the detector design.
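The frequency-shift readout can be illustrated with a lumped LC model: heat deposited in the superconductor raises its (kinetic) inductance, lowering the resonant frequency f = 1/(2π√(LC)). The component values below are placeholders, not the detector's.

```python
import numpy as np

# Toy LC resonator model of the readout. All values are illustrative.
C = 10e-12    # farads
L0 = 100e-9   # henries, baseline inductance
dL = 0.5e-9   # henries, inductance shift after a particle strike (assumed)

def f_res(L, C):
    """Resonant frequency of an ideal LC circuit."""
    return 1.0 / (2.0 * np.pi * np.sqrt(L * C))

f0 = f_res(L0, C)       # baseline resonance (~160 MHz for these values)
f1 = f_res(L0 + dL, C)  # resonance after the strike
df = f0 - f1            # downward frequency shift to be measured
```

Monitoring the resonance with an AC probe tone converts the deposited energy into a measurable frequency shift, which is the quantity the Geant4/G4CMP simulations help predict.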

Joseph Tan

Charged particle dynamics in a 0.7 T Unitary Penning Trap / Electron Beam Ion Trap

Ion trapping has many applications, ranging from precision measurements of fundamental constants to the creation of atomic clocks to measure time. Traps requiring strong static magnetic fields, like Penning traps and electron beam ion traps (EBITs), are often expensive to maintain since they require a constant power supply and cooling to sustain their powerful electromagnets. As an alternative to traps that require electromagnets, we studied compact permanent magnet ion traps for the production and capture of highly charged ions. The two systems we simulated were a cylindrical Penning trap and an electron beam ion trap, each using a similar configuration of electrodes and Halbach arrays of permanent magnets to create a magnetic field of ~0.75 T at the trap center, with adjustable electrical potential well depth. A Penning trap is an ion trap designed to confine a charged particle in a small volume using a magnetic field and electrostatic potential well to make it oscillate in a controlled fashion. An EBIT is similar but includes an intense electron beam aligned with the trap axis and square-well electrostatic potential, enabling it to further ionize atoms in the trap as well as provide tighter radial confinement. In this work, through simulations using the Lorentz software, we found that the new 0.75 T Penning trap would readily confine Pr X (9 times ionized), Kr XVII, Ar XIV, and bare Ne at an initial energy of 5 eV; moreover, the motion of these ions trapped at much lower energies (less than 0.5 eV) agreed closely with the theory for idealized Penning traps with hyperbolic electrodes. We analyzed the small deviations from this theory to evaluate the effects caused by ions not being tightly confined axially at the trap center. For the electron beam ion trap, we created a program to quantify the current density and energy of the beam as a function of position in the trap.
Although the electrode potentials can be readily optimized for an electron beam that neglects self-repulsion, we are working to determine the maximum beam current that the mini-EBIT can support at various energies if Coulomb interaction (space charge) in a realistic electron beam is accounted for.
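For an idealized Penning trap, the three eigenfrequencies of the ion motion follow from the free-space cyclotron frequency and the axial frequency. The sketch below evaluates them for bare Ne in a 0.75 T field; the well depth V0 and characteristic trap size d are assumed placeholders, not the actual electrode parameters.

```python
import numpy as np

# Ideal-Penning-trap eigenfrequencies for bare Ne (Ne^10+), SI units.
# B matches the ~0.75 T trap; V0 and d are illustrative assumptions.
e, u = 1.602176634e-19, 1.66053906660e-27   # elementary charge, atomic mass unit
q, m, B = 10 * e, 20.0 * u, 0.75
V0, d = 10.0, 5.0e-3                        # well depth (V), trap size (m)

w_c = q * B / m                              # free-space cyclotron frequency
w_z = np.sqrt(q * V0 / (m * d**2))           # axial oscillation frequency
root = np.sqrt((w_c / 2) ** 2 - w_z**2 / 2)  # real only if the trap is stable
w_plus = w_c / 2 + root                      # modified cyclotron frequency
w_minus = w_c / 2 - root                     # magnetron frequency
```

The identity ω₊ + ω₋ = ω_c and the ordering ω₋ < ω_z < ω₊ are standard checks for ideal-trap theory, and deviations from them are exactly the kind of effect the simulations probe.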

Michael Gaitan

Calibration of Three-Axis Accelerometer and Gyroscope Using Pendulum-Based Excitation

Accelerometers are devices that sense acceleration. They are used in machines such as vehicles, cellular phones, and video game controllers. They are especially important in vehicles when GPS services are not available. We developed a method to calibrate a 3-axis accelerometer and gyroscope under dynamic conditions using pendulum excitation. The device under test is a battery-operated Arduino microelectromechanical system (MEMS) capable of transmitting its readings to a computer via Bluetooth. Our intrinsic-properties model is based on the assumption that the device may have some intrinsic offset. We account for this by first calibrating the device under static conditions. To do so, a cube was built to measure the intrinsic accelerational sensitivity and offset for each of the three accelerometers. These measured sensitivity and offset values are then used in the rotational calibration. A pendulum was built to excite the accelerometers and gyroscopes in order to calibrate the rotational sensitivity of the gyroscope under dynamic conditions. A calibration analysis was then performed by fitting the measured acceleration and rotation data to an equation for pendulum motion. Uncertainty was determined from the covariance of the fit parameters. We compare the results of our calibration to the manufacturer's advertised sensitivities and uncertainties, and assess the effectiveness of our pendulum-based calibration method.

Gregory Cooksey

Automated Collection and Control of Flow Cytometry on a Microchip

Flow cytometry has become an important tool in various fields, due to its capacity to process and measure multiple simultaneous optical signals from thousands of individual objects (typically cells) per second. An instrument collects data for each object that indicate parameters such as cell size, granularity, and various aspects of cell type or activity (as indicated by emission of fluorescent biomarkers that reflect the abundance of target molecules). For commercial cytometers, automated analysis is standard, although the signals are simplified to scalars such as total integrated intensity. These simplified data, along with limited information about system configuration, make it very challenging to find rare events such as circulating tumor cells in a sample. We are developing a system that improves counting accuracy and signal uncertainty using precise control of flow and modulation and collection of light through multiple optical paths. Our system requires integration and automation of additional components, which is being carried out through a central MATLAB graphical user interface. First, there are signal generators that modulate several lasers, each of which excites cells in different parts of the system. Light from each region is then guided to a powered photodetector, where it is converted to an electrical signal that is then amplified and digitized. My project has specifically focused on control and readout of each component, so that we can reach the project goal of continuous collection and processing of 16-bit digitized data at gigabytes per second in real time. Commands for each device were adapted to interface with MATLAB using serial object commands and memory mapping from C to MATLAB. This project will streamline the tasks of data collection and processing, and allow better statistics for the detection and characterization of rare objects in flow cytometry.

Richard Steiner

Characterization of Fluke 8588 Reference Multimeter to Digitize and Analyze Power Data

The Fluke 8588A is a relatively new reference multimeter claimed to be very stable in comparison to other digitizing multimeters, and it is advertised as being designed for calibration and use in metrology laboratories. The meter, released in early 2019, features multiple measurement functions, two of which are investigated in this project: the digitized DC current (DCI) and digitized DC voltage (DCV) functions. The focus of this project is to test the meter under different conditions in order to find corrections that can be applied at high sampling rates in the DCI and DCV modes. Of particular interest is how the accuracy of the DCI and DCV functions is affected by varying parameters such as the signal frequency, the number of periods, and the points per period. The nonlinear behavior of the results will be adjusted with correction factors to make the measurements more linear. The long-term goal of the project is to use the Fluke reference multimeter to digitize and analyze AC power data. Additionally, a similar characterization was performed on a simpler sensor-plus-digitizer system for NetZero House power data.

Scott Diddams

Modeling mode shifts in astro-etalons for high precision spectrographic calibration

Fabry-Pérot (FP) etalons, optical cavities formed by two parallel mirrors, are used for spectrographic calibration in a wide variety of applications. An FP etalon takes broadband, continuous input light and outputs a spectrum of discrete wavelengths called modes, and this well-defined, comb-like spectral output makes etalons well suited for application in radial velocity (RV) exoplanet detection. However, RV measurements can require fractional precision on the order of 10^-9, making careful study of etalon systems and their mode stability critical. Recent studies of several etalon systems indicate that over time, an etalon's modes experience a complex, chromatically dependent drift, and understanding the mechanism behind this drift is necessary to design etalons that can achieve higher RV precision. In this work, we employ Fresnel analysis and the transfer matrix method to model the phase shift of light in the etalon system to study possible sources of this behavior, including changes to the mirror coatings caused by small temperature changes (~1 mK) in the system and small variations in incident angle of light on the cavity. We study the individual effect of each parameter change on the spectrum and model several combinations of parameter changes meant to test possible physical explanations for the behavior. While parameter variation led to Doppler shifts showing qualitative agreement with earlier measurements of mode shift, exact agreement and conclusive predictions are still not realized. Overall, these results therefore do not provide conclusive evidence of the mechanism behind the chromatic dependence of the mode shifts, but they do both exclude some simple explanations for the measured behavior and, more importantly, provide insight into the sensitivity and stability of FP etalons, which is crucial to further understanding and stabilizing etalon systems for radial velocity measurements.
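The etalon's comb-like output and its sensitivity to tiny cavity changes can be sketched with the standard Airy transmission function; the reflectivity and lengths below are illustrative, not those of an astro-etalon under study.

```python
import numpy as np

# Lossless Fabry-Perot etalon at normal incidence. Values are illustrative.
R = 0.95        # mirror (intensity) reflectivity
L = 10.0e-3     # cavity length, meters
n = 1.0         # intracavity refractive index (vacuum-gap etalon)

def transmission(wavelength):
    """Airy transmission: peaks where the round-trip phase is a multiple of 2*pi."""
    delta = 4.0 * np.pi * n * L / wavelength   # round-trip phase
    F = 4.0 * R / (1.0 - R) ** 2               # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2.0) ** 2)

# Mode m sits at lambda_m = 2nL/m; a tiny length change dL shifts every mode
# by the same fractional amount, d(lambda)/lambda = dL/L.
m = 25000
lam_m = 2.0 * n * L / m
dL = 1.0e-12
lam_shifted = 2.0 * n * (L + dL) / m
frac_shift = (lam_shifted - lam_m) / lam_m
```

In this simple picture all modes shift achromatically; the measured chromatic dependence is exactly what requires the fuller Fresnel/transfer-matrix treatment of the mirror coatings.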

John D. Wright and Aaron N. Johnson

Simulating the Simulator – A Computational Fluid Dynamic Analysis of NIST’s Stack Simulator

NIST's Scale-Model Smokestack Simulator (SMSS) is a unique research facility designed to improve the flow measurement accuracy of sensors and methods used to quantify greenhouse gases and other pollutants from smokestacks. With the imminent closure of the SMSS and steady advances of commercial Computational Fluid Dynamics (CFD) programs, this study investigated the feasibility of replacing the SMSS with a virtual smokestack simulator. (CFD programs numerically solve the governing equations of fluid motion – the Navier-Stokes equations – in complex geometries too difficult to solve analytically.) If successful, a CFD-based virtual stack simulator would allow us to inexpensively test various probe and smokestack configurations. We used CFD to calculate velocity profiles in the SMSS and the outputs of multi-path ultrasonic flow meters and compared them to experimental results from the real SMSS. We numerically simulated the turbulent flow in the SMSS facility using the Reynolds-Averaged Navier-Stokes (RANS) equations with the k-omega turbulence model, which we solved using the commercially available COMSOL Multiphysics software. The model produced 3D velocity profiles in the straight "reference" section of the SMSS as well as the much more challenging "test" section that is downstream of a sharp elbow, where large swirl is known to exist in the real facility. Pitch and yaw angles of the swirling flow in the test section were severely underestimated by the model, probably due to artificial diffusion applied in commercial programs to promote convergence of the numerical simulations. Simulated outputs of multi-path ultrasonic flow meters installed in the reference and test sections were compared to actual measurements made in the SMSS. Results suggest that this CFD model is unable to replace the measurements taken by physical sensors but may still be useful in determining the qualitative differences between various possible configurations of smokestacks and ultrasonic meters.

Allen Goldstein

Calibration of Smart Grid Simulation Response to Oscillatory Waveforms

The NIST Smart Grid program aims to advance measurement science that will improve grid efficiency and reliability while enabling greater use of renewable energy sources in the grid. These goals are achieved through research, standardization, and testing, which require the development of a Power Hardware-in-the-Loop (PHIL) capability with a connected grid emulation model, real-time simulator (RTS), amplifier, and battery simulator to mimic the behavior of an electric network with connected renewable energy sources. PHIL systems need to be calibrated and periodically validated in order to ensure the fidelity and accuracy of the results they produce. This project develops a calibration tool for the PHIL capability that is particularly suited to generating PHIL test signals containing multiple frequency components, and it gives users the ability to perform closed-loop stability tests. Further, the calibration tool will use a state-space representation for test signals that permits external modulation of the test signal while also confining the evolution of the test signals to known manifolds within the state-space representation. This new technique will be implemented using a real-time (RT) processor, field-programmable gate array (FPGA) based signal conditioning, and laboratory-grade digital-to-analog (DAC) and analog-to-digital (ADC) converters. All of these components have to be evaluated and characterized thoroughly in order to prove this approach feasible. If shown to meet the desired accuracy and performance requirements, this system will be integrated into the NIST Smart Grid Testbed facility and other PHIL test systems used by the Department of Energy to validate grid stability in response to renewable generation.

Michael Zwolak

Using Machine Learning to Characterize DNA Structures

Modern advances in machine learning allow for the rapid automation of classification (unsupervised) and prediction (supervised) procedures, which has the potential to expedite biophysical and biomolecular research considerably. The classification of biomolecules based on distinct physical parameters has a particular application in studying proteins for drug development. Here, we use a deep neural network to classify DNA structures in a binary model. Using the radius of gyration of simulated DNA strands, we train machine-learning models to classify the strands as either folded or unfolded. This work serves as a baseline that can be expanded to characterize biomolecules into more classes using different physical properties.
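As a minimal stand-in for the learning task (not the deep neural network used in this work), a learned threshold on synthetic radius-of-gyration values illustrates the binary folded/unfolded setup; all values are synthetic, not simulation data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: folded strands have a smaller radius of gyration (Rg)
# than unfolded ones. Distribution parameters are illustrative assumptions.
rg_folded = rng.normal(1.0, 0.15, 500)    # arbitrary units
rg_unfolded = rng.normal(2.0, 0.25, 500)
rg = np.concatenate([rg_folded, rg_unfolded])
labels = np.concatenate([np.zeros(500), np.ones(500)])  # 0 = folded, 1 = unfolded

# "Train" the simplest possible classifier: a threshold midway between
# the class means of the training data.
threshold = 0.5 * (rg_folded.mean() + rg_unfolded.mean())
predictions = (rg > threshold).astype(float)
accuracy = np.mean(predictions == labels)
```

When a single physical parameter separates the classes this cleanly, even a threshold performs well; the value of a deep network is in extending the approach to many features and more than two classes.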

Akobuije Chijioke

Resonator modeling for novel sound calibration technique

Calibration is an important task that must be performed for sound recording instruments (most commonly laboratory standard microphones) at the National Institute of Standards and Technology (NIST). The need to improve calibration techniques has led to a novel approach based on the refractive index of air. A cylindrical resonator design would be ideal for accurately calibrating these sound recording instruments. The resonator will consist of a stainless-steel cylinder that holds a piezoelectric sound generator at one end and the sound recording instrument at the other. On the sides, two two-way mirrors will reflect a laser beam through the acoustic field inside the cylinder. The acoustic field changes the refractive index of the air in the cylinder, which affects the laser beam; the beam therefore indicates how much the acoustic pressure has changed inside the cylinder, providing the information needed to calibrate the sound recording instrument. Using the COMSOL Multiphysics modeling tool, a basic model of the resonator was designed and studied. Based on the different eigenfrequencies studied, a sound field was modeled to observe the behavior of the resonator itself. As the project advances, a different acoustic cavity may prove more useful for different eigenfrequencies and/or different sound measurement instruments.
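For intuition about the eigenfrequencies such a model produces, the axial plane-wave modes of a closed rigid cylinder follow f_l = l·c/(2L). The length below is an assumed placeholder, not the resonator's actual dimension.

```python
# Axial (plane-wave) eigenfrequencies of a closed rigid cylinder: f_l = l*c/(2L).
# The cavity length is an illustrative assumption.
c = 343.0   # speed of sound in air at room temperature, m/s
L = 0.30    # cavity length, meters (assumed)

def axial_mode(l):
    """Frequency of the l-th axial standing-wave mode, hertz."""
    return l * c / (2.0 * L)

modes = [axial_mode(l) for l in (1, 2, 3)]  # first three axial modes
```

The full COMSOL model adds radial and azimuthal modes and realistic boundary conditions, but the axial series sets the basic frequency scale at which the resonator operates.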

Gregory Cooksey

Graphical User Interface for Analysis of Data from a Serial Microfluidic Cytometer

High throughput measurement of cells is crucial to many key applications in medicine, like drug discovery, cancer screening, and therapeutic monitoring. Such measurements are commonly done using flow cytometers – measurement instruments with throughputs on the order of tens of thousands of cells per second. In a flow cytometer, cells tagged with fluorescent biomarkers move along a microfluidic channel where they pass through a laser beam. Emitted light is captured by optical waveguides and recorded by photodetectors. However, each cell is only measured once, and the recorded values are reduced to scalars. This limits the ability to characterize biomarker distributions with high precision, reducing the capacity to discriminate small changes within a population and to distinguish rare objects. To characterize and improve measurement uncertainty, we have developed a novel cytometer which contains multiple measurement regions. Measurements are repeated four times along the flow path and are recorded as high-resolution intensity-over-time signals. This data format necessitates novel analysis software that can process the unique features of our cytometer, such as visualization and analysis of the intensity profile and measurement precision for each object. A data analysis graphical user interface (GUI) was developed in MATLAB with those features. This GUI was also utilized to set up comparisons between traditional flow cytometers and our novel device. Serial cytometer data sets were first converted to be interoperable with current data standards for cytometry, and traditional comparison metrics like cytometer sensitivity were built into the GUI. Overall, by reducing measurement uncertainty, the serial cytometer holds promise for improving decision making in fields of medicine where cytometers are used.

Michael P. Zwolak

Employing Machine Learning to Classify Biomolecular Translocation Events

The ionic current flowing through nanopores enables measuring and distinguishing biomolecules as they translocate through the pore. The change in ionic current as a molecule enters the pore is one characteristic feature that reflects molecule size and type. The duration of the event and the waiting time between events also provide characteristic features. The distribution of all these features provides the means to statistically distinguish different molecular events, but it is not the only information present in ionic current time traces. We employ machine learning to classify biomolecular translocation events within experimental data on DNA translocation, as well as synthetic data, to determine which features enable accurate classification.
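A minimal sketch of this style of event classification, using synthetic features and a generic scikit-learn classifier rather than the actual analysis pipeline (the feature scales and the choice of a random forest are assumptions for illustration only):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for translocation events of two molecule types.
# Features per event (all scales hypothetical): current-blockade depth (nA),
# dwell time (ms), and waiting time since the previous event (ms).
def make_events(n, depth, dwell, wait):
    return np.column_stack([
        rng.normal(depth, 0.05, n),
        rng.lognormal(np.log(dwell), 0.3, n),
        rng.exponential(wait, n),
    ])

X = np.vstack([make_events(500, 0.30, 1.0, 50.0),   # molecule type A
               make_events(500, 0.45, 2.0, 50.0)])  # molecule type B
y = np.repeat([0, 1], 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In practice the interesting question is exactly the one posed above: which of these features, or what additional structure in the raw current traces, drives accurate classification.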

Meghan Shilling

Simulating the Effects of XCT System Geometry Errors

X-Ray Computed Tomography (XCT) is a nondestructive imaging technique used to obtain information about a workpiece’s 3-D geometry and material properties. XCT systems work by generating a beam of x-rays that propagate through the workpiece and are then measured by a detector to generate a radiograph. As the workpiece rotates, many radiographs are generated. Through a computationally intensive reconstruction process, the radiographs are used to generate a 3-D voxel model of the workpiece. Demand for XCT for dimensional metrology has increased along with the advent of processes such as additive manufacturing, which yield parts with more complex internal geometries. Using the free, open-source ASTRA toolbox for MATLAB, we are developing simulations to help quantify the effects of uncorrected instrument geometry errors found in XCT systems by examining the influence the errors have on sphere form and sphere center-to-center measurements. We hope to use this tool to further validate the single-point ray-tracing method developed at NIST. ASTRA’s modular design allows any step and variable in the simulation to be changed. Using this feature, we plan to implement errors in the forward projection, such as a detector shift, and generate a reconstruction. The reconstructed model will then show the error as a sphere form or location error. Our work thus far has yielded code capable of generating sphere artifacts and reconstructions based on a specified set of input variables that represent XCT parameters. This code will serve not only as a validation tool for the ray-tracing method, but also as a framework for building and validating future simulations. This work will help support the ongoing standards development efforts for ISO 10360-11 and future revisions of ASME B89.4.23, both of which are standards for the performance evaluation of XCT systems.
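The sphere center-to-center metric itself can be sketched independently of the reconstruction. The toy below uses plain NumPy rather than the ASTRA toolbox, with made-up geometry: it places two spheres in a voxel volume (a stand-in for a reconstructed model), estimates each center by an intensity-weighted centroid, and compares the measured center-to-center distance with the nominal value. An uncorrected geometry error, such as a detector shift, would appear as a deviation in this comparison.

```python
import numpy as np

# Hypothetical sketch: a 64^3 voxel volume containing two spheres whose
# nominal centers are 24 voxels apart along the first axis.
def sphere_volume(shape, centers, radius):
    zz, yy, xx = np.indices(shape, dtype=float)
    vol = np.zeros(shape)
    for cz, cy, cx in centers:
        vol += ((zz - cz)**2 + (yy - cy)**2 + (xx - cx)**2 <= radius**2)
    return vol

def centroid(vol, mask):
    # intensity-weighted centroid of the masked voxels
    idx = np.argwhere(mask)
    w = vol[mask]
    return (idx * w[:, None]).sum(axis=0) / w.sum()

shape = (64, 64, 64)
nominal = [(20.0, 32.0, 32.0), (44.0, 32.0, 32.0)]
vol = sphere_volume(shape, nominal, radius=6)

# Split the volume between the two spheres and locate each center.
half = np.indices(shape)[0] < 32
c1 = centroid(vol, vol.astype(bool) & half)
c2 = centroid(vol, vol.astype(bool) & ~half)

measured = np.linalg.norm(c2 - c1)
print(f"center-to-center: measured {measured:.2f} vs nominal 24.00 voxels")
```

In the error-free case the measured and nominal distances agree; repeating the measurement on a reconstruction made with a deliberately shifted detector would quantify the resulting location error.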

Vladimir Aksyuk, Alexander Yulaev, Chad Ropp

Optimal Design of Fiber to Waveguide Facet Coupler Geometries Using the Finite Element Method

Photonic Integrated Circuits (PICs) are currently under development for applications in atomic physics where optical interrogation is required. In such experiments, matter may be subjected to optical-frequency electromagnetic radiation in order to perform various measurements. Lasers can generate radiation of sufficient intensity and concentration, but to precisely direct and manipulate the light, an optical fiber and a PIC are used. The light is guided by optical fiber to the PIC as a fiber mode and requires mode conversion to efficiently excite the desired waveguide mode. To improve coupling of the fiber mode to a waveguide mode, a facet coupler is required. Such a device consists of a tapered waveguide connecting two ports, and it bidirectionally converts one mode to another. In order to investigate more efficient methods of optical coupling, a working model of a rectangular waveguide was developed to compare against experimental results; then a series of simulations and analyses in COMSOL Multiphysics and MATLAB were performed to investigate the effect of various geometric deformations. The laboratory measurement of 3 dB transmission loss for the 150 nm × 100 nm rectangular waveguide was found to be reasonable, as the simulation yielded a value of 1.25 dB for the same size waveguide, showing room for improvement in the laboratory. To improve transmission, a model of a tapered facet coupler is under development. Assuming that a sufficiently long taper can operate adiabatically, several geometries are being considered to determine the optimal shape to maximize transmission. The tapered waveguide geometry shows promising efficiency, and further improvements are expected through the implementation of deformation-based optimization algorithms as well as continued iterative design.
Studies investigating the impact of mesh element size, cladding thickness, system size, and boundary conditions were also performed, and a procedure for the modeling of waveguide structures was developed. This work serves as a starting point for further development of optimal facet coupler geometries and could be readily adapted for the development of other waveguide geometries.
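For reference, the loss figures quoted above follow the standard decibel definition, which a short sketch makes concrete:

```python
import math

# Insertion loss in dB from input and output optical powers:
# loss_dB = -10 * log10(P_out / P_in).
def loss_db(p_in, p_out):
    return -10.0 * math.log10(p_out / p_in)

# A 3 dB loss corresponds to transmitting about half the power,
# while 1.25 dB corresponds to transmitting about 75 % of it.
print(f"{loss_db(1.0, 0.5):.2f} dB")    # ~3.01 dB
print(f"{loss_db(1.0, 0.75):.2f} dB")   # ~1.25 dB
```

So the gap between the measured 3 dB and simulated 1.25 dB losses amounts to roughly a quarter of the launched power that the simulation suggests could still be recovered.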

Gillian Nave

Branching Fractions and Oscillator Strengths in Singly Ionized Chromium

Spectral lines of singly ionized chromium (Cr II) are present in the spectra of many astrophysical objects, including stars, the interstellar medium, and nebulae. However, interpretation of these spectra is hampered by a lack of atomic data for these lines, including atomic transition probabilities. Such transition probabilities can be measured by combining a measurement of the relative intensities of all spectral lines from a particular upper level with the lifetime of that level. While the NIST Atomic Spectra Database contains over 4000 spectral lines for Cr II below 350 nm, fewer than 300 transition probabilities for these lines have been measured. In order to catalog the transition probabilities of Cr II, archived spectroscopic data recorded by a NIST FT700 vacuum ultraviolet Fourier transform spectrometer from chromium and argon hollow cathode lamps were analyzed. Using a specialized program to plot and analyze atomic spectra, Gaussian profiles were fit to spectral lines at known transition frequencies. An intensity correction of the sampled lines was then applied by dividing the fitted lines by the response curve obtained from a deuterium hollow cathode lamp. The intensity-corrected lines were then sorted by their upper levels, and their intensities were normalized into branching fractions using another specialized Python program. By combining these branching fractions with known lifetimes for the upper levels of Cr II, a table of transition probabilities was constructed for 15 upper levels of Cr II.
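The final normalization step can be sketched as follows. The wavelengths, intensities, and lifetime below are placeholders, not Cr II data; the relations are the standard ones: the branching fraction of a line is its intensity-corrected intensity divided by the sum over all lines from the same upper level, and the transition probability is A_ul = BF_ul / tau_u.

```python
# Illustrative intensity-corrected line intensities from one upper level
# (all values invented) and a hypothetical upper-level lifetime in seconds.
intensities = {"283.56 nm": 120.0, "287.04 nm": 45.0, "312.50 nm": 15.0}
lifetime_s = 3.0e-9

total = sum(intensities.values())
branching_fractions = {line: i / total for line, i in intensities.items()}
transition_probs = {line: bf / lifetime_s
                    for line, bf in branching_fractions.items()}

for line in intensities:
    print(f"{line}: BF = {branching_fractions[line]:.3f}, "
          f"A = {transition_probs[line]:.3e} s^-1")
```

By construction the branching fractions from a given upper level sum to one, so the level lifetime fixes the absolute scale of all its transition probabilities at once.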

Richard Steiner

Characterization of Fluke 8588 for Digital Power Measurement

The immediate purpose of this project is to characterize the Fluke 8588 Digital Multimeter’s (DMM) behavior when measuring low-frequency (10 Hz – 1 MHz) AC voltage and current from digitized data. The process of calculating the root-mean-square (RMS) voltage of AC signals using optimized discrete samples is known as Swerlein’s algorithm. The goal is to evaluate the error in measurements produced using this technique, determine possible sources of this error in the electronics or implementation, and create applicable correction factors to extend the DMM’s range of accuracy (within 0.01 %). The reconstruction step can be done either by fitting a sum of harmonic sine waves to the samples (four-parameter sine fit) or by evaluating the Fast Fourier Transform (FFT), and an additional goal is to compare the effectiveness of each method. Errors in the data can depend on a number of parameters, including input frequency, sample rate, and aperture time. Errors that already have well-defined correction factors, such as those for aperture time and low-pass filter bandwidth, are applied to the data and evaluated for their ability to correct the signal relative to the calibrator output. The remaining error is then fit to possible characteristic equations and attributed to properties of the F8588 or calibrator. Initial data were taken with respect to sample rate and frequency to determine a valid test parameter space. Then, RMS voltage data taken in each DMM range over the relevant frequencies were processed with known corrections, and the remaining error was shown to have higher-order behavior at high frequencies, with a ‘resonance’ near 5 kHz in some cases. Further knowledge of the calibrator and DMM electronics is needed to fit a characteristic equation to this error. In conclusion, while a method for determining the error was implemented, more information about the source of the error is needed to develop a correction factor.
The sine-fit and FFT methods must also be investigated further to account for errors they introduce. Once the DMM is characterized, the overall goal is to use it to calibrate AC outputs from calibrators and to more accurately measure the harmonic content of AC power distorted by non-linear loads such as LEDs.
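The agreement between the time-domain and FFT routes to an RMS value can be sketched on an ideal, coherently sampled sine wave (the sample rate, frequency, and amplitude below are illustrative; the real measurements additionally involve the aperture and filter corrections discussed above):

```python
import numpy as np

# Illustrative digitized AC waveform: integer number of cycles per record,
# so both RMS estimates should agree closely.
fs = 100_000.0          # sample rate, Hz (invented)
f0 = 1_000.0            # input frequency, Hz
n = 10_000
t = np.arange(n) / fs
v = 7.07 * np.sin(2 * np.pi * f0 * t)   # ~5 V RMS test signal

# Route 1: RMS directly from the time-domain samples.
rms_time = np.sqrt(np.mean(v**2))

# Route 2: RMS via the FFT, using Parseval's theorem on the one-sided
# spectrum (DC term once, other bins doubled).
spectrum = np.fft.rfft(v) / n
power = np.abs(spectrum[0])**2 + 2 * np.sum(np.abs(spectrum[1:])**2)
rms_fft = np.sqrt(power)

print(f"time-domain RMS: {rms_time:.5f} V, FFT RMS: {rms_fft:.5f} V")
```

With non-coherent sampling, windowing, or noise, the two estimates diverge, which is one reason the sine-fit and FFT methods each introduce their own errors and must be compared.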

Alan Zheng

Optimization of Low and High pass Cutoff Parameters for Firearm Population Statistics

Within the criminal justice system, several automated algorithms are being utilized to compare forensic evidence (e.g., DNA and fingerprints). NIST has been developing objective algorithms for firearm and toolmark comparisons for the past decade. The primary goal of this research is to provide forensic examiners with an objective similarity metric and associated weight-of-evidence estimations that can be used in court testimony.

When a semi-automatic firearm is fired, the cartridge case is ejected and is left with an impression of the firing pin and breech face. These are some of the unique toolmarks that examiners use to identify the source firearm. The NIST algorithms developed to quantify the similarity of these toolmarks are the Congruent Matching Cells (CMC) method and the Areal Cross Correlation Function (ACCF). The CMC method breaks up the topography image into an array of square cells. Then, with specified thresholds and parameters, these cells are matched to corresponding areas on the questioned cartridge case. By contrast, the ACCF method compares the entire measured topography area at once.

My project consisted of optimizing correlation parameters to maximize the separation between known matching and known non-matching populations of scores. Additionally, the Kruskal-Wallis test was used to determine whether there were any significant differences between populations due to modifying a parameter. The primary parameters of interest in my research are the Gaussian regression low-pass and high-pass filter cutoffs. The Gaussian regression filter is used to extract the toolmarks of interest by removing long-wavelength waviness and short-wavelength noise from the data. To optimize these two filters, correlations were conducted with combinations of Gaussian regression high- and low-pass cutoffs using a full factorial design. To analyze the data output by the surface correlator software, an R script was written to display basic statistics, along with graphs to help determine the degree of separation between the known-match and known non-match populations. The presentation will display the optimal parameters that provide the greatest separation in CMC and ACCF scores for known matching and known non-matching populations.
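The statistical comparison can be sketched with synthetic score populations (the distributions below are invented and not real CMC or ACCF scores; the original analysis was done in R, and Python with SciPy is used here only for illustration): a Kruskal-Wallis test between two parameter settings, plus a simple mean-difference separation metric between match and non-match populations.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)

# Invented similarity scores under two hypothetical filter-cutoff settings,
# plus a known non-match population.
km_cutoff_a = rng.normal(0.80, 0.05, 200)   # known-match scores, cutoff A
km_cutoff_b = rng.normal(0.82, 0.05, 200)   # known-match scores, cutoff B
knm = rng.normal(0.20, 0.05, 200)           # known non-match scores

# Does changing the cutoff significantly shift the known-match scores?
stat, p = kruskal(km_cutoff_a, km_cutoff_b)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3g}")

# A simple separation metric: difference of population means in units of
# the pooled standard deviation.
pooled = np.sqrt((km_cutoff_a.var(ddof=1) + knm.var(ddof=1)) / 2)
separation = (km_cutoff_a.mean() - knm.mean()) / pooled
print(f"match vs non-match separation: {separation:.1f} sigma")
```

Sweeping the two cutoffs over a full factorial grid and recomputing such a separation metric for each combination is one way to frame the optimization described above.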

Zachary Levine

Conditions for a Single Photon Echo

The development of photonic quantum memory in rare-earth-ion crystals using the atomic frequency comb protocol relies on photon echoes. Modeling these photon echoes provides insights that may enhance future experimental efforts. In this work, we model the dielectric function of the crystal to determine how it affects photon echoes. We illustrate the effect that altering the ratio of the free spectral range to the full width at half maximum (the finesse) of the dielectric function has on the photon echo. When the finesse is much greater than one, the photon echoes persist for a long time. When the finesse is approximately equal to one, there are only a few photon echoes, and perhaps only one, which is a necessary feature for high-efficiency quantum memory. As an aside, we also present a few notes on the Hamiltonian for Pr:YSO, which may be used as a quantum memory system.
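The finesse argument can be illustrated with a toy Fourier model (all parameters are arbitrary and this is not the dielectric-function model used in the work): a periodic spectral comb with tooth spacing Delta produces time-domain responses, the echoes, at multiples of 1/Delta, and widening the teeth (lowering the finesse = Delta/FWHM) suppresses the later echoes.

```python
import numpy as np

n = 4096
freq = np.arange(n) - n / 2          # frequency grid, arbitrary units
delta = 64.0                         # comb tooth spacing (free spectral range)

def comb(fwhm):
    # periodic comb of Gaussian teeth of the given full width at half maximum
    sigma = fwhm / 2.355
    return sum(np.exp(-((freq - k * delta) ** 2) / (2 * sigma**2))
               for k in range(-32, 33))

def echo_amplitudes(fwhm, n_echoes=3):
    # time-domain response of the comb; echoes sit at multiples of n/delta
    impulse = np.abs(np.fft.ifft(np.fft.ifftshift(comb(fwhm))))
    step = int(round(n / delta))
    return [impulse[k * step] for k in range(1, n_echoes + 1)]

hi_f = echo_amplitudes(fwhm=4.0)    # finesse = 16: echoes persist
lo_f = echo_amplitudes(fwhm=32.0)   # finesse = 2: later echoes die out
print("high-finesse echo ratio e2/e1:", hi_f[1] / hi_f[0])
print("low-finesse  echo ratio e2/e1:", lo_f[1] / lo_f[0])
```

In this toy picture the narrow-tooth (high-finesse) comb keeps successive echoes nearly equal in strength, while the broad-tooth comb damps the second echo strongly relative to the first, mirroring the regime with few, or only one, echo described above.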

Jared Wahlstrand

Analysis of Complex Multidimensional Optical Spectra by Linear Prediction

Multidimensional coherent spectroscopy (MDCS) is a versatile technique for exploring light-matter interactions in semiconductor nanostructures and can provide a more complete understanding of photonic and electronic phenomena to be exploited in optoelectronic applications. MDCS is based on multi-pulse time-domain third-order nonlinear optical spectroscopy, known as four-wave mixing, from which the signal is converted into multiple frequency dimensions to separate complex and overlapping spectral contributions. The optical signal is typically analyzed in the frequency domain through a discrete Fourier transformation (DFT), which computationally deconstructs the temporal oscillations into multidimensional peaks representing the constituent contributions of sinusoids at each frequency. When peaks are located at nearby frequencies, weaker features of interest can be obscured by the tails of stronger peaks. Here, linear prediction from singular value decomposition (LPSVD) is implemented, which uses a non-iterative linear fitting procedure to fit the time-domain signal to a model of the sum of damped sinusoids. Because the fitting is linear, it is not necessary to guess initial fitting parameters, as in the more common method of nonlinear fits. We apply LPSVD to the analysis of zero-, one-, and two-quantum two-dimensional spectra from a III-V semiconductor microcavity in order to separate the strong exciton-polariton response from weaker biexciton features. It is shown that LPSVD reduces noise, eliminates distortions inherent in DFT algorithms, and isolates and allows for the analysis of weak features of interest. Additionally, we show that LPSVD handles highly non-ideal peaks without requiring a predetermined analytical model, as in nonlinear fitting, by fitting non-Lorentzian peaks with multiple Lorentzians. Applying this method to the semiconductor microcavity will allow for precise determination of nonlinear optical interactions in these complex devices.
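A bare-bones version of the LPSVD idea can be sketched as follows. This is a simplified illustration under ideal, noiseless conditions, not the analysis code used in this work: a 1-D complex signal is fit to a sum of damped sinusoids by linear prediction, with a rank-truncated SVD serving as the pseudoinverse so that, with noisy data, the small singular values would be discarded.

```python
import numpy as np

def lpsvd(x, order, rank):
    n = len(x)
    # forward linear prediction: x[m] ~ sum_k c[k] * x[m-1-k]
    A = np.array([x[m - 1::-1][:order] for m in range(order, n)])
    b = x[order:]
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]            # keep only the top `rank` modes
    c = Vh.conj().T @ (s_inv * (U.conj().T @ b))
    # roots of the prediction polynomial encode frequency and damping
    z = np.roots(np.concatenate(([1.0], -c)))
    z = z[np.abs(z) <= 1.0]                  # discard any unstable roots
    # keep the roots carrying the largest linearly fitted amplitudes
    V = z[None, :] ** np.arange(n)[:, None]
    amps = np.linalg.lstsq(V, x, rcond=None)[0]
    z = z[np.argsort(-np.abs(amps))[:rank]]
    z = z[np.argsort(np.angle(z))]
    return np.angle(z) / (2 * np.pi), -np.log(np.abs(z))

# Synthetic signal: two overlapping damped complex sinusoids
# (frequencies in cycles per sample).
t = np.arange(200)
x = (1.0 * np.exp((2j * np.pi * 0.10 - 0.01) * t)
     + 0.5 * np.exp((2j * np.pi * 0.13 - 0.02) * t))
freqs, damps = lpsvd(x, order=20, rank=2)
print("frequencies:", np.round(freqs, 4), "dampings:", np.round(damps, 4))
```

Because every fitting step is linear (least squares, then a polynomial root-finding), no initial parameter guesses are needed, which is the property the abstract contrasts with nonlinear fitting.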

Michael Gaitan

Pendulum and Rotational Calibration Comparison of Three-Axis Gyroscopes

A three-axis accelerometer and gyroscope can measure acceleration along, and rotation about, each axis of a Cartesian coordinate system. Over the past 11 weeks, we have developed a unique approach to calibrate the accelerometer and gyroscope integrated on an Adafruit Feather nRF52840 Sense. This device, capable of sending accelerometer and gyroscope data via Bluetooth, was studied and analyzed using software packages including the Arduino IDE, Python 3, and Excel. A new approach for calibrating gyroscopes was developed using a pendulum as the excitation source. The device was calibrated and compared to a calibration using a 2-axis angular position and rate table with constant rotation excitation. Static conditions were also considered in order to determine the offset of the device and ultimately to quantify its sensitivity. This report will focus on the comparison of the calibration using the pendulum and the rate table. Because no two devices are manufactured exactly alike, the team aimed to compare readings of offset and sensitivity against each other and against the manufacturer’s data as well.
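The core of either calibration reduces to a linear fit of the device readout against a known reference rate, whether that reference comes from the pendulum model or the rate table. A hypothetical sketch (the sensitivity, offset, and noise values are invented, not measured properties of the nRF52840 Sense):

```python
import numpy as np

rng = np.random.default_rng(3)

# Reference rotation rates applied to the device, deg/s (invented values).
ref_rate = np.linspace(-250.0, 250.0, 11)

# Assumed device behavior for this sketch: readout = sens * rate + offset.
true_sensitivity, true_offset = 1.03, -0.8
raw = true_sensitivity * ref_rate + true_offset + rng.normal(0, 0.2, 11)

# The calibration fit: slope = sensitivity, intercept = offset.
sensitivity, offset = np.polyfit(ref_rate, raw, 1)
print(f"sensitivity: {sensitivity:.3f}, offset: {offset:.2f} deg/s")
```

Static (zero-rate) readings give the offset directly, as the abstract notes, while comparing the fitted slopes from the pendulum and rate-table runs tests whether the two excitation sources yield consistent sensitivities.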

Yaw Obeng

Mining of Solder Joint Reliability Data to Identify Possible Precursors to Catastrophic Failure

Solder joint failures found in electronic systems fall into two main groups: hard failures and no failures found (NFF). NFF failures are responsible for half of all failures in solder joints. The focus of this research is to detect NFF failures before complete (hard) failure. By determining how solder joints fail in NFF failures, the Department of Defense could save upwards of $2 billion annually. Sample boards were analyzed in a thermally controlled chamber cycling between 0 ℃ and 100 ℃ while monitoring the electrical properties of the joints. Comparing the cycle number, corresponding temperature, and microwave signal return loss (S11) output, distinct breaks were observed in the S11 vs. time plots. These break points appear to relate to different events occurring in the solder joints prior to complete failure. While we do not fully understand the physico-chemical nature of these events, they possibly include recrystallization of the solder joint alloy. Research is ongoing to better understand these events within the overall failure process.
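One way to locate such break points objectively is a piecewise-linear change-point fit. The sketch below runs on a synthetic S11-versus-cycle trace with invented slopes and noise, not the measured data: it tries every candidate break point, fits a straight line to each side, and keeps the break with the smallest total residual.

```python
import numpy as np

rng = np.random.default_rng(4)

def find_break(y, margin=10):
    # brute-force two-segment piecewise-linear fit; returns the break index
    t = np.arange(y.size, dtype=float)
    best, best_cost = None, np.inf
    for b in range(margin, y.size - margin):
        cost = 0.0
        for seg_t, seg_y in ((t[:b], y[:b]), (t[b:], y[b:])):
            coef = np.polyfit(seg_t, seg_y, 1)
            cost += np.sum((seg_y - np.polyval(coef, seg_t)) ** 2)
        if cost < best_cost:
            best, best_cost = b, cost
    return best

# Synthetic S11 trace (dB) over thermal cycles: slow drift, then a distinct
# change in slope at cycle 250 standing in for a pre-failure event.
cycle = np.arange(400, dtype=float)
s11 = np.where(cycle < 250,
               -20.0 + 0.002 * cycle,
               -19.5 + 0.050 * (cycle - 250))
s11 += rng.normal(0, 0.01, cycle.size)

print(f"estimated break near cycle {find_break(s11)}")
```

Applied to the measured traces, the recovered break points could then be cross-referenced with cycle number and temperature to associate each break with a candidate physico-chemical event.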


Created December 15, 2021