
2017 IFSEMS Abstracts


 

James Doyle | Lynn Garcia | Scott Shappell | Jenifer Smith | Peter Stout | Anthony Tessarolo | Linzi Wilson-Wilde | Craig Beyler | Dawn A. Boswell | Mehzeb Chowdhury | Sarah Chu | Sabrina Cillessen | Lynn Garcia (Panel 1) | Lynn Garcia (Panel 2) | Melissa Gische | Heike Hoffman | Vici Inlow | Amy Jeanguenat 1 | Amy Jeanguenat 2 | Jim Jones | Karen Kafadar | Roman Karas | Paulo Kunii (FHE Methodology) | Elizabeth Laposata | Ranjan Maitra | Marcel Matley | Ashraf Mozayani 1 | Ashraf Mozayani 2 | Robert Nelms | Jennifer Newman | Anthony Onorato | Imani Palmer | Sandra Rodriguez-Cruz | Sandra Sachs | Ivana Sesardic | Carolyn R. Steffen | Harish Swaminathan | Nicholas Tiscione | Richard Torres

 

  • James Doyle
    Consultant, National Institute of Justice
    Sentinel Event Reviews:  Exploring Outside the Stovepipes
    Most crimes are prosecuted in state courts, and most forensic work is done by state, county, and local laboratories. Through its "Sentinel Events Initiative," the National Institute of Justice is supporting an exploration of the potential for integrating into criminal justice practice a system-oriented approach: non-blaming, all-stakeholders, all-ranks reviews of significant events, "near misses," and "good catches," with the goal of promoting "forward-looking accountability." This approach holds promise not only for improving practice but also for fostering legitimacy and community respect for the law. The presentation will discuss the origins and prospects of this approach. By focusing on prospective accountability and individual responsibility for collective outcomes, it can encourage a culture of continuous improvement. The presentation will provide an account of challenges encountered and of creative approaches to surmounting those barriers, and will assess the potential for a paradigm-shifting mobilization of safety concepts in criminal justice.
     
  • Lynn Garcia
    Texas Forensic Science Commission
    Courage Comes First
    Most crimes are prosecuted in state courts, and most forensic work is done by state, county and local laboratories.  In Texas, the experience of the state's forensic commission has shown that stakeholder collaboration, a prosecutor's association driven by a commitment to transparency, and the willingness of crime laboratory leadership and forensic practitioners to face tough scientific issues are all key components in ensuring the integrity of forensic science.  By fostering a culture of continuous improvement and embracing a shared obligation to identify and correct potential miscarriages of justice, Texas has emerged as a national leader in forensic reform without compromising its commitment to law and order.  This presentation will provide examples in areas like complex DNA mixture analysis, bite mark comparison, forensic video analysis, microscopic hair examination and firearm/tool mark analysis.
     
  • Scott Shappell
    Embry-Riddle Aeronautical University
    Safeguarding Excellence in Forensic Science
    With modern day advances in science and technology, professionals in a variety of complex fields such as aviation, healthcare, nuclear power, and the forensic sciences are operating at unprecedented levels of excellence. Consequently, when errors are made they are often the end result of a cascade of human factors at the organizational, supervisory, and bench level. Managing these threats to excellence is therefore fundamental to maintaining the viability and credibility of any organization. This panel presentation will briefly describe innovative methods for managing human factors associated with forensic investigations that are scientifically derived, empirically tested, and proven in the field. Specifically, the Human Factors Analysis and Classification System (HFACS) is a system-safety model that effectively bridges the gap between human factors theory and applied human factors analysis. It is a proven methodology for reliably identifying and analyzing human error in complex systems. The HFACS framework provides a clear understanding of the reasons errors occur so that effective intervention programs can be developed.
     
  • Jenifer Smith
    District of Columbia Department of Forensic Sciences
    Seeing the Error of Our Ways: Perspectives from Department of Forensic Science, Washington DC
    Dr. Jenifer Smith, Director of the Department of Forensic Sciences (DFS), will address the steps taken to rebuild and strengthen the forensic programs at DFS following the suspension of DNA testing in April 2015 by Muriel Bowser, Mayor of Washington, D.C. The suspension followed two damaging reports issued by the ANSI-ASQ National Accreditation Board (ANAB) and by DNA consultants hired by the United States Attorney's Office. Dr. Smith will discuss the issues identified within the reports, the root cause analyses, and the implementation of quality actions taken to revamp quality practices affecting all divisions at DFS. Smith will also discuss the changes that shape DFS's current error management practices, which are geared to reveal and mitigate the errors that inevitably occur in any actively engaged laboratory charged with protecting the safety and health of its citizens.
     
  • Peter Stout
    Houston Forensic Science Center
    The Recovery of Houston: The successes and challenges of the ongoing remediation of forensic science in Houston
    The Houston Police Department crime laboratory was well known as an effectively failed operation. In 2012, HPD and the City of Houston took the unique approach of extracting the forensic operations into a semi-corporate structure. Between 2012 and 2014, a nine-member independent board of directors worked with HPD to set up the governance framework that would become the Houston Forensic Science Center (HFSC). In April 2014, the corporation assumed management responsibility for an organization that covered biology, toxicology, controlled substances, multimedia and digital evidence, latent prints, firearms, and crime scene investigation. The starting point was approximately 12,000 backlogged requests and a total average turnaround time of 140 days, with only biology, toxicology, firearms, and controlled substances accredited. All corporate policies and business systems had yet to be built. The corporation had 5 corporate employees, 85 City of Houston employees, and 48 sworn, classified officers, and a very significant disclosure was being investigated by the Texas Forensic Science Commission. Three years into the effort (March 2017, at the writing of this abstract), HFSC has 3,000 backlogged requests: roughly 500 requests in biology for work other than sexual assault cases and 2,900 latent print requests, almost entirely property crime cases. Total average turnaround time is about 45 days. HFSC now has 140 corporate employees, 20 City of Houston employees, and 20 classified officers. Business policies exist, safety and security systems have been built and implemented, and grant management systems have been built and successfully audited, all while completing approximately 95,000 requests in the three-year period with the workload increasing, a significant accomplishment. Three example projects help frame what is possible: implementation and growth of blind quality control systems, making as many documents as possible publicly available and searchable (eDiscovery), and a Lean Six Sigma re-engineering of biology. Each of these efforts, while only part of the whole, helps illustrate the concurrent improvement in productivity AND quality measures and how a focus on error management and quality has driven improvement and efficiency. The presentation will use these three programs to discuss how the quasi-corporate structure made them possible, what they have helped accomplish, unexpected results, and ultimately limitations and challenges. Lastly, the talk will cover the biggest challenges, specifically remediation of information and data systems, and facility and cultural transition. These present significant ongoing challenges where progress has been made but sustainable solutions are still being sought. The recovery path for significant systematic errors in forensic laboratories is feasible; it is, however, a long, multiyear undertaking that is challenging on many fronts.
     
  • Anthony Tessarolo
    Ontario Centre of Forensic Sciences
    Doing the Right Thing When You're Wrong 
    Why do significant errors continue to occur in forensic laboratories despite a commitment to robust accreditation standards? Twenty years ago, the Centre of Forensic Sciences (CFS) in Toronto, Canada, played a significant role in the wrongful conviction of Guy Paul Morin. The subsequent public inquiry (i.e., the Kaufman Inquiry) resulted in over 30 recommendations aimed at improving the accuracy, reliability, and objectivity of CFS services. In this presentation, the implementation of those recommendations is reviewed and the efficacy of an enhanced quality management system is explored. The application of lessons learned to the handling of more recent laboratory errors is discussed.
     
  • Linzi Wilson-Wilde
    National Institute of Forensic Science at Australia New Zealand Policing Advisory Agency
    Error Management Issues and Activities in Australia and New Zealand and the Role of the National Institute of Forensic Science
    The National Institute of Forensic Science Australia (NIFS) was established in 1992 and has five roles: coordination of cross-jurisdictional projects, information exchange, research and innovation, education and training, and quality assurance. NIFS is funded and governed by all government forensic service providers of Australia and New Zealand and ultimately reports to the Police Commissioners of the two countries. NIFS's operating framework is underpinned by a three-year Strategic Plan comprising programs of work approved by the Police Commissioners. The Strategic Plan is given effect through the implementation of an annual Business Plan developed in conjunction with the NIFS governance group, the Australia New Zealand Forensic Executive Committee (ANZFEC). The Business Plan details projects and activities conducted or facilitated by NIFS, and NIFS is required to report quarterly on those projects and activities. An important area of reporting concerns quality assurance. NIFS has been the conduit for all proficiency tests purchased and analysed by Australian government laboratories since 1992, runs a competency-based certification program for crime scene, fingerprint, and firearm practitioners (established in 2003), and has been involved in the development of national and international standards for forensic science since 2008. NIFS also produces an online proficiency test specific to crime scene examiners, the After the Fact test, which it has produced since 1999. NIFS therefore has a long history of active involvement in quality and holds a wealth of data dating back years. This data spans disciplines including crime scene, drug analysis, fingerprints, and biology/DNA, and possibly represents the largest such collation of data internationally to date. Collectively, the data provide an opportunity to identify and measure the quality and error specific to each of the disciplines within Australia. The results of this and all of NIFS's quality assurance activities will be presented.
     
  • Panel Presentation: Overcoming the Next Crisis—Media Relations and Crisis Communications in the Forensic Services Sector (no presentation files, see video here.)
    • Moderated by Gail Porter, National Institute of Standards and Technology
    • LaShon Beamon, Washington, DC Department of Forensic Sciences & The Office of the Chief Medical Examiner
    • Julie Bolcer, NYC Office of Chief Medical Examiner
    • Ramit Plushnick-Masti, Houston Forensic Science Center
       

Breakout Session Presentations
 

  • Craig Beyler
    Jensen Hughes, Inc.
    ISO 17020 is a Bad Fit for Forensic Science Investigation
    The essence of forensic science investigations is coming to the investigation with an open mind, collecting data objectively, and using the scientific method to reach conclusions. The investigation should be accomplished objectively, truthfully, and without expectation bias, preconception, or prejudice. ISO 17020 represents the antithesis of this approach to investigation. ISO 17020 was written as a conformity assessment standard, not as a forensic science investigation standard. In 17020 the user comes to the scene with a definite and well-defined preconception, and the goal of the assessment is to determine whether the scene is consistent with that preconception. This is an entirely appropriate mindset if you are inspecting an existing bridge for flaws or inspecting a building for fire code violations. It is not appropriate for forensic science investigations. Many have suggested that supplemental requirements can be used to remedy the differences between investigation and conformity assessment. Indeed, this approach has been tried, and it results in confusion and a lack of transparency. The resulting accreditation is a 17020-based accreditation, with all the meaningful and relevant requirements buried in supplemental requirements no user will ever see. When you say a forensic science investigation unit is accredited to 17020, you know nothing about the standards employed in the unit; they exist in supplemental requirements that are subject to change on a case-by-case basis. The result is a meaningless accreditation. The International Laboratory Accreditation Cooperation (ILAC) has recommended the use of a conformity assessment standard in forensic science going back as far as 2002. While no doubt well intentioned, the result is a standard that is a bad fit for forensic science. The forensic science community should be writing its own standards and should not simply accept an existing, ill-fitting standard because it is familiar to accreditation organizations.
     
  • Dawn A. Boswell
    Tarrant County Criminal District Attorney’s Office: Conviction Integrity Unit 

     

  • Mehzeb Chowdhury
    Durham University

    Take a Cue From NASA: The Real Value of 3D and 360 Imaging in Crime Scene Investigation and the Investigative Process
    Courts have traditionally relied on forensic science units to produce visual evidence in court as an alternative to crime scene visits. Crime scene investigators (CSIs) gather and use evidence to recreate the precise sequence of events that occurred during the course of a crime. Part of this reconstruction process is photography and sketching, with the latter still largely done by hand. Photos give a limited picture of the crime scene, restricted by the photographer's field of view and subject to their interpretation of the scene and the importance they place on different pieces of evidence. Video can capture more of the scene but is still limited in its field of view. Sketches lay out the scene in a way that neither photographs nor videos can: they provide a general overview of the scene and the precise and relative location of evidence. But they also give an inherently less realistic representation of the crime scene, determined even more by the artist's interpretation. Similarly, photos and videos can be turned into 3D computer animations, but these again are subjective and can even be tailored to support the case of whichever side is presenting them. Are there any alternatives? In this paper, I explore the latest innovations in 3D and 360° imaging at crime scenes, as well as the scope and feasibility of adopting commercial technology for criminal justice endeavors. Could artificial intelligence, robotics, and virtual and augmented reality help us to be better CSIs? The presentation will show how, using the latest of these innovations, crime scene managers could review an entire investigation as if they were present at the crime scene, evaluating the performance of individual crime scene examiners and providing feedback accordingly. As a means of overall quality control within crime scene settings, this could prove invaluable. 360° video could also provide an extra layer of protection for crime scene examiners if the credibility and reliability of their professional judgment and acumen is called into question during a trial. If the recording device is placed at a vantage point from which the entire crime scene can be seen, then it can be argued that the crime scene team has nothing to hide, giving their work transparency in the process of evidence identification, collection, and storage. In error-management systems, 360° imaging could allow for the early detection of flawed techniques and be a useful tool in promoting and showcasing good practice. These issues and others will be investigated by closely tying broad scientific innovations to the possible utility of these technologies for crime scene work. These technologies could not only allow investigators to revisit crime scenes as they were at the time of the initial forensic examination, but also add a further dynamic to the scene documentation process which may be invaluable in contextualizing both physical and behavioral evidence.
     

  • Sarah Chu
    Innocence Project
    Assuring Quality Helps Ensure Justice
    Forensic science errors that affect the integrity of testing results are generally inadvertent and can result from the built-in error rate of a method, simple mistakes, and sometimes negligence. The misapplication of forensic science is a contributing factor in 46% of the wrongful convictions overturned by forensic DNA testing. Although it certainly happens, misconduct less frequently compromises the integrity of the forensic product. Misleading testimony can also trigger corrective action. Frequently, when an error, nonconformity, or adverse event occurs at a forensic science service provider (FSSP), the public learns of it long after the fact and is assured by an authority that a review of cases was undertaken and no wrongful convictions were found. This opaque approach to addressing laboratory error does not improve public trust and reduces the FSSP's capacity to respond to error in a meaningful and transparent way. The criminal justice system's significant dependence on forensic evidence also complicates the FSSP's response, as errors, nonconformities, or adverse events have the unique capacity to touch a high volume of cases. How, then, can an FSSP's quality management system improve public trust and ensure justice when an FSSP must declare that it was responsible for erroneous laboratory reports, an adverse event, or nonconforming testimony? This presentation discusses four essential quality management and workplace accountability strategies that can help address misapplications of forensic science and restore justice to defendants. First, the FSSP should implement a just workplace culture that anticipates that errors happen, captures them before they snowball, and facilitates a corrective process when they do become critical. Second, the root cause analysis process adopted by the FSSP must rigorously seek systemic-level causes and should include a disclosure system to help others learn from an FSSP's error, nonconformity, or adverse event. Third, retrospective reviews should be comprehensive (with every effort made to identify all affected cases), conducted by an independent external entity, and incorporate a public reporting component. Lastly, every defendant deserves to receive notice when an error, nonconformity, or adverse event affects the integrity of the results in his or her case. Problems should not have to rise to the level of a wrongful conviction to merit notification. FSSPs should work with institutional criminal justice stakeholders to develop a judgment-free notification process that leaves materiality decisions to the courts. Forensic science is a human endeavor, and it is unfair to expect perfection from forensic scientists. However, knowing that errors are inevitable, every participant in the forensic testing process has an ethical duty to correct errors, adverse outcomes, or nonconformities, and to work with stakeholders in the criminal justice system to notify the affected defendants and victims. Comprehensively and transparently addressing errors improves FSSP capacity to learn from what took place, provides defendants with the opportunity to consider the impact of these errors on their cases, and will help ensure that the application of forensic science is more accurate and more just.
     
  • Sabrina Cillessen
    Virginia Department of Forensic Science
    A Novel Approach to Addressing Changes of Opinion in Latent Prints
    The presentation will focus on the Virginia Department of Forensic Science's approach in its Latent Print Section, where a change of opinion during the examination and documentation process is not treated as an error. The robust verification procedures employed in the Latent Print Section have ensured that accurate results are reported to investigating agencies. All comparison conclusions are verified, with approximately 10% blind-verified. All verification results are independently documented conclusions, which demonstrates reproducibility. If the conclusions differ, the examiner and verifier discuss and review the examination documentation, which includes objective evidence of the basis for the conclusions. The shift from treating a change of opinion as a mistake or error to treating it as part of the analytical process has enhanced a culture of openness and trust within the section. The reason for a change of opinion is documented in the case file and may be used to provide training to the entire section; such training has included image enhancement techniques and approaches to latent print orientation. The theory that blind verifications are more robust due to the elimination of confirmation and contextual bias will also be discussed. Changes of opinion occur in open verifications, where the conclusions of the examiner are known to the verifier, as well as in blind verifications.
     
  • Lynn Garcia (Panel 1)
    Texas Forensic Science Commission
    Responding to the Call for Transparency in Forensic Laboratories: a Collaborative Approach
    In 1986, Michael Morton's wife Christine was brutally murdered in their home in the presence of their three-year-old son. Upon returning home from work in the afternoon, Morton discovered he was the prime suspect. He was arrested six weeks later and sentenced to life in prison. Prosecutors withheld key evidence that was later discovered by Morton's pro bono lawyers and the national Innocence Project. After Morton served 25 years in prison, DNA evidence analyzed by the state's crime laboratory exonerated him. Soon after his release, Morton became the face of prosecutorial reform in Texas. In his words, "revenge doesn't work, but accountability does." That accountability came in the form of the Michael Morton Act, which requires the State to disclose to the defendant any exculpatory, impeachment, or mitigating document, item, or information in the possession, custody, or control of the state that tends to negate the guilt of the defendant or would tend to reduce the punishment for the offense charged. Most Texas prosecutors, long known for their tough-on-crime stance, were deeply moved by the facts of Morton's case, especially as more information was revealed about the root cause of his wrongful conviction. After Governor Rick Perry signed his namesake legislation, Morton spoke to prosecutors from the Texas District and County Attorney's Association, the largest association of prosecutors in the world. In the words of the organization's executive director, Morton's speech was "a tremendously gracious and moving and compelling talk." The Michael Morton Act set in motion a major cultural shift in Texas. Not long ago, seasoned prosecutors would commonly debate the materiality prong of Brady v. Maryland, often concluding that the information did not need to be disclosed. Today, a majority of both prominent leadership and rank-and-file prosecutors have developed a wholehearted commitment to transparency and disclosure in criminal cases without consideration for the materiality of the information. Texas laboratories share this commitment to fairness and transparency, but the logistical implementation of the commitment has been challenging due to the sheer volume of information involved. This is especially true when it comes to the operations of forensic laboratories, considering most lawyers (defense and prosecution) have very little understanding of what actually goes on in a crime laboratory or the tremendous amount of data underlying laboratory operations. It is easy for a prosecutor to say "give us everything," but what does that really mean? This panel will discuss what impact the changes in prosecutorial culture have had on forensic laboratories in Texas. Using two case examples, one in blood alcohol and the other in DNA analysis, panelists will discuss what steps Texas has taken to identify the information in crime laboratories that needs to be disclosed. Perhaps everyone can agree that non-conformances in a particular case file should be disclosed, but what about when an examiner is testifying in one case but had a switched sample in a different case? What about failed proficiency tests, or an examiner being removed from casework temporarily in the course of a quality event? What happens when prosecutors have doubts about the laboratory's explanation for an analytical process or a deviation from an SOP? What can laboratories do to help educate their client base as well as the broader community, especially the judiciary? Can technology help with discovery and disclosure? What is the laboratory's obligation if it believes a prosecutor is not being forthcoming with defense counsel or the judge? The panel will explore these and other current trends in discovery and disclosure, offering solutions and lessons learned for a critical issue with broad national implications.
  • Lynn Garcia (Panel 2)
    When Checks and Balances Fail: Lessons Learned from the Austin Police Department's DNA Laboratory
    In May 2015, the Texas Forensic Science Commission ("Commission") focused its attention on DNA mixture interpretation in Texas criminal cases, due to national confusion regarding the appropriate use of a statistical method known as the Combined Probability of Inclusion/Exclusion. Part of the review was to examine the protocols and case samples of publicly funded accredited laboratories in Texas. During the Commission's review of one laboratory's protocols (the Austin Police Department (APD) DNA section), the Commission identified some fundamental misunderstandings of DNA concepts that led to additional questions regarding the scientific leadership in the laboratory. The misunderstandings were exacerbated by unclear information published in SWGDAM guidelines, as well as the misinterpretation of certain language in Dr. John Butler's textbook Advanced Topics in Forensic DNA Typing: Interpretation. The Commission's concerns were further amplified by a particular sexual assault case, brought to the Commission's attention by the Travis County District Attorney's office, with strong indications of carryover contamination. In that case, the penile swab of a suspect who had no involvement in the sexual assault was most likely contaminated by low-level DNA from the victim due to the positioning of the evidence while it was being analyzed. After a May 2016 audit of the laboratory, the Commission issued a report identifying fundamental issues in DNA analysis at the APD laboratory, including insufficient validation studies, unsupportable criteria for assessing stochastic effects in mixtures, and likely carryover contamination that failed to trigger a quality event, among other issues. The Austin Police Department voluntarily removed forensic biology (including DNA) from its accreditation scope and is currently assessing the best path forward for the laboratory. The panel will provide the audience with an understanding of the issues observed in the DNA section and a discussion of their root cause. Panelists will explore questions regarding the relationship between the District Attorney's office, the laboratory, and the police department, and the extent to which prosecutors and law enforcement management need to understand scientific concepts in order to flag key issues. The group will discuss efforts by stakeholders at the city, county, and state level to work collaboratively in identifying solutions to restore the public's faith in the laboratory and move the community forward. Panelists will also discuss notification and disclosure issues, as well as how cases are being triaged for retroactive review to ensure the best possible use of public resources. Finally, the panel will discuss the role of the broader forensic community in enabling these types of situations to occur, particularly the responsibility of the accrediting bodies, SWGDAM, the FBI's QAS, and other system-wide checks and balances. What happens when those checks and balances do not work? How is it possible for a laboratory with well-intentioned analysts and managers to make these types of fundamental mistakes? And most importantly, why were none of the core scientific issues identified during the many assessments and audits to which the laboratory was subject over the last 12 years? Panel members will discuss the implications of these questions and the lessons learned at APD for other DNA laboratories and the forensic DNA community as a whole.
  • Melissa Gische
    FBI Laboratory
    Consensus Panels for Conflict Resolution in Latent Print Examinations
    No one likes conflict or disagreement -- that butterfly feeling in your stomach when another examiner says the dreaded words "I don't agree with your conclusion." But disagreements are a part of science and a part of the forensic sciences.  What if there was a conflict resolution process that allowed for differences of opinion to be retained while still producing a technically sound decision?  The FBI Laboratory Latent Print Unit is experimenting with just such a process which incorporates examiner discussion, consensus panels, and transparency in both the method and the records. This presentation will discuss the FBI Latent Print Unit's use of consensus panels for conflict resolution.
     
  • Heike Hoffman
    Center for Statistics and Applications in Forensic Evidence
    Investigation of Error Sources in 3D Microscopy of Bullet Lands
    3D microscopy has been shown to be a very promising approach for automatic matching algorithms that appear to be robust across various sources of error, such as operator, microscope, and resolution. Here, we investigate more closely the sources of error due to microscope parameter settings, staging, and operator effects, as well as between-scan differences. The experiment is set up as a split-split-split plot design with scores for known matches and known non-matches as the response.
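    As a concrete illustration of how such match scores might be analyzed, the following is a minimal sketch, not the authors' split-split-split plot analysis: it fits a simplified mixed-effects model with a fixed effect for known match status and a random intercept for operator. The data file and column names are hypothetical.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical long-format data: one row per land-to-land comparison score.
      # Assumed columns: score, match ("KM" or "KNM"), operator, microscope, scan.
      df = pd.read_csv("bullet_land_scores.csv")

      # Simplified model: fixed effect for match status, random intercept per operator.
      # The full split-split-split plot analysis would additionally nest microscope,
      # parameter-setting, staging, and repeated-scan effects.
      model = smf.mixedlm("score ~ C(match)", data=df, groups=df["operator"])
      result = model.fit()
      print(result.summary())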
     
  • Vici Inlow
    United States Secret Service
     
  • Amy Jeanguenat 1
    Mindgen, LLC
    Can Forensic Laboratories Reduce Contextual Information Bias and Increase Happiness at Work?
    A common suggestion for overcoming the effects of case-specific and trace-influence biases is to control the flow of case information to the forensic examiner. One approach is to put quality assurance measures in place that identify task-relevant and task-irrelevant information, limiting contextual information that may bias the examiner. Similar approaches use a case manager as the main point of contact for those involved with the investigation, leaving the examiner completely independent of case discussions. Critics of limiting contextual information voice several concerns: some information that might be contextually biasing is needed to perform a critical examination of the evidence; limiting communication may have an adverse effect on relationships with stakeholders; and involvement with case-specific information increases personnel engagement, a reason many established careers in the forensic field. Undermining the latter, employee engagement, can lead to decreased production, poor decision making, and increased errors. Therefore, employee happiness is an important topic when trying to control for bias or implementing other changes in the laboratory. If case involvement has been a motivator in the past, attendees will learn scientifically grounded alternatives for engaging employees. This presentation will introduce suggestions from research in positive psychology and neuroscience on the best methods to connect employees with the purpose behind their positions, motivate them, and keep them engaged by creating and sustaining a positive culture. By implementing alternative approaches to increase engagement, laboratories can then determine whether controlling contextual information to reduce bias is feasible.
  • Amy Jeanguenat 2
    Improving Forensic Quality Through Forming a Constructive Relationship with Stress
    Over the past decade there has been a shift and growing openness regarding how forensic examinations, quality of work, and error management are affected by human factors. Understanding and correctly managing human factors can enhance laboratory quality as well as improve the decision-making ability of forensic scientists. Nevertheless, despite studies across multiple industries on workplace wellness and stress, this area of human factors has been neglected within the forensic science domain. Forensic scientists work in a dynamic environment that includes common workplace pressures such as workload volume, tight deadlines, lack of advancement, number of working hours, low salary, technology distractions, and fluctuating priorities. In addition, forensic scientists encounter a number of industry-specific pressures, such as technique criticism, repeated exposure to crime scenes or horrific case details, access to funding, working in an adversarial legal system, and zero tolerance for 'errors'. When stress factors are repeated or constant, performance and productivity tend to decrease due to physical and psychological phenomena. Thus, stress becomes an important human factor to mitigate for overall error management, productivity, and decision quality. Techniques such as mindfulness can become powerful tools for a forensic scientist to enhance decision making and resilience. Such resources can be cultivated throughout the day and activated during stressful moments to mediate their effects. Through this presentation, attendees will obtain a better understanding of their relationship with stress and how it may affect the quality of their work. Stress reduction techniques, such as present-moment awareness, will be introduced as a way to improve resilience.
  • Jim Jones
    George Mason University
    The Uncertain Relationship between Residual Digital Fragments, Files, and Computer Activity
    Recovery of complete, intact digital files from a forensic image is a staple of the digital forensics process, but such recovery is not always possible and intact files alone may not paint a complete picture of past activity. Consequently, digital forensics investigations often incorporate partial file contents as evidence, such as fragments created when a file is deleted and some but not all of the original data contents are overwritten. But digital fragments are not necessarily unique to a single source file. Much like a partial fingerprint may or may not match multiple full prints depending on the nature of the partial print and the universe of full prints, a partial file may or may not match multiple full files depending on the nature of the fragment and the universe of full files. Prior work has demonstrated the probative potential for single digital fragments. For example, different files created by the same application frequently share header and footer content, and low entropy fragments are rarely discriminatory. On the other hand, fragments from the data portion of compressed video files are frequently unique to a particular video source file. We are extending this prior work to incorporate multiple fragments from different files. When a user or agent on a computing device performs some action, such as installing and using a particular application, multiple files are created and potentially modified. Upon application termination and possible uninstallation, many of these files are deleted and their contents are fully or partially overwritten. It is these multiple residual partial fragments over which we are attempting to reason and infer the past causal activity. As a result, our inference is sensitive not only to the individual relationships between fragments and their source files, but also how the uncertainty in those relationships is combined and affects the overall inference regarding specific past activity. In this talk, we present our methodology for capturing digital artifacts created by various activities, and we discuss our work to date constructing associated reasoning models. These models attempt to capture the uncertain relationships between fragments and whole files, and between collections of whole files and activities. We also attempt to incorporate the frequency of whole files and fragments in the "universe", which in our case is a sample of files and disk images from open sources. We discuss empirical results obtained using a publicly available data set of disk images and known activity, and we discuss the next steps for this work. Near term challenges include how to assess the relationship between a fragment and its parent file, how to aggregate this uncertainty across multiple fragments and files, and how to measure and report the combined uncertainty in the assessment of past activity.
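    As a toy illustration of the kind of uncertainty combination described above, here is a minimal sketch, an assumption rather than the speaker's model: a naive-Bayes combination of per-fragment likelihoods into a posterior for a hypothesized activity. All probabilities are hypothetical placeholders.

      # A minimal illustrative sketch: combine per-fragment uncertainty about source
      # files into a posterior over a hypothesized activity (e.g., "application X was
      # installed and used").  Independence between fragments is assumed for simplicity.

      def posterior_activity(prior, likelihood_given_activity, likelihood_given_not):
          """Naive Bayes combination over independent fragment observations."""
          p_act, p_not = prior, 1.0 - prior
          for l_act, l_not in zip(likelihood_given_activity, likelihood_given_not):
              p_act *= l_act
              p_not *= l_not
          return p_act / (p_act + p_not)

      # Three recovered fragments: how likely each is to be observed if the activity
      # occurred vs. not (a low-entropy header fragment is weakly diagnostic; a unique
      # compressed-data fragment is strongly diagnostic).
      p = posterior_activity(
          prior=0.5,
          likelihood_given_activity=[0.90, 0.60, 0.95],
          likelihood_given_not=[0.40, 0.55, 0.05],
      )
      print(f"Posterior probability of the hypothesized activity: {p:.3f}")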
     
  • Karen Kafadar
    University of Virginia
    Statistical Modeling and Analysis of Trace Element Concentrations in Forensic Glass Evidence
    ASTM has published three standards related to different test methods for forensic comparison of glass: micro X-ray fluorescence spectrometry (XRF), ICP-MS, and LA-ICP-MS. Each standard includes a series of recommended calculations from which "it may be concluded that the samples are not from the same source." Using publicly available data from Florida International University and from other sources, we develop statistical models based on estimates of means and correlation matrices of the measured trace element concentrations recommended in two of the three ASTM standards, leading to population-based estimates of error rates for the comparison procedures stated in these standards. Our results therefore do not depend on internal comparisons between pairs of glass samples, the representativeness of which cannot be guaranteed, and our error rates are estimated based on a wide range of assumed distributions for the measurement error. Thus our results apply to glass samples that have been or can be measured via these technologies.
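    The following is a minimal simulation sketch in the spirit of the abstract, not the authors' models: it estimates how often an interval-based comparison criterion falsely excludes two fragments that truly share a source, drawing concentrations from an assumed multivariate normal distribution. The element means, covariance, coverage factor, and replicate counts are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)
      mean = np.array([2.1, 0.8, 1.5])        # log concentrations of three elements (assumed)
      cov = np.diag([0.02, 0.01, 0.015])      # measurement covariance (assumed)
      k, n_reps, n_sim = 4.0, 3, 20_000       # coverage factor, replicates, simulations

      false_exclusions = 0
      for _ in range(n_sim):
          known = rng.multivariate_normal(mean, cov, size=n_reps)
          quest = rng.multivariate_normal(mean, cov, size=n_reps)
          lo = known.mean(axis=0) - k * known.std(axis=0, ddof=1)
          hi = known.mean(axis=0) + k * known.std(axis=0, ddof=1)
          q = quest.mean(axis=0)
          if np.any((q < lo) | (q > hi)):     # "exclude" if any element falls outside
              false_exclusions += 1

      print("Estimated false-exclusion rate:", false_exclusions / n_sim)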
     
  • Roman Karas
    FBI Laboratory
    Establishing Traceability
    The presentation introduces the concept of measurement traceability, beginning with the origins of the science of metrology. The need for uniformity and consistency in measurements is not unique to forensic science; it is also required in research, industry, and commerce. Measurement traceability brings with it confidence and reliability, factors that are critical in today's forensic science environment. The FBI Laboratory's practices will be explained, tying the concepts of measurement traceability to actual examinations and to ASCLD/LAB and ISO/IEC 17025 requirements. The audience will become more familiar with ISO traceability requirements and how they may impact example disciplines. ASCLD/LAB "board interpretations and guidance documentation" are also discussed from a laboratory perspective. Examples from several disciplines are incorporated to illustrate how metrology, measurement traceability, and measurement uncertainty can be addressed in the participant's laboratory environment.
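    As a worked example of the kind of calculation that traceability supports (not taken from the presentation), the sketch below combines independent standard uncertainty components into a combined and expanded uncertainty for a single weighing; the component values are hypothetical.

      import math

      u_balance_cal = 0.0008    # g, from the balance calibration certificate (assumed)
      u_repeatability = 0.0012  # g, standard deviation of replicate weighings (assumed)
      u_buoyancy = 0.0003       # g, estimated correction uncertainty (assumed)

      # Combined standard uncertainty: root-sum-of-squares of independent components.
      u_c = math.sqrt(u_balance_cal**2 + u_repeatability**2 + u_buoyancy**2)

      # Expanded uncertainty with coverage factor k = 2 (approximately 95 % coverage).
      U = 2 * u_c
      print(f"u_c = {u_c:.4f} g, expanded uncertainty U = {U:.4f} g")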
     
  • Paulo Kunii (FHE Methodology)
    Brazilian Federal Police
    Proposal for a Methodology to Manage Contextual Information in Forensic Handwriting Examination Based on Digitized Documents
    Forensic handwriting examiners (FHEs) spend most of their time comparing questioned and specimen writing to determine whether or not they were produced by the same author. Both questioned handwriting and writing standards may be present on a wide variety of supports, often surrounded by printed or written information that is irrelevant to the task of handwriting examination. According to Dr. Itiel Dror, such contextual information can be classified into five different levels [1]. This work focuses on levels 1 (trace), 2 (reference material), and 3 (case information). Contextual information is a potential source of cognitive contamination, which may lead to bias and undermine the reliability of the expert's conclusions. With the intent of minimizing the FHE's exposure to task-irrelevant information present in the questioned and standard materials, the author proposes an examination workflow based on the digitization of all documents containing either questioned handwriting or writing standards. Digitization, adjustments, and cropping are the responsibility of a case manager, who must organize and deliver a batch of image files containing as little task-irrelevant information as possible. The image acquisition process must include all handwriting features considered relevant by the case manager; thus, the case manager must be an FHE with relevant training and experience. Basic guidelines on the digitization process and the subsequent comparison of handwriting are proposed. The equipment used in the digitization process (scanner, camera, video spectral comparator, microscope, etc.), appropriate lighting and camera angles, and the use of illumination at different wavelengths are discussed. Appropriate software is needed to process, enhance, and examine the handwriting images. The use of GIMP, a free and open-source image editing program, is suggested, and usage examples are given. By employing the proposed methodology, it is expected that the FHE will be less affected by task-irrelevant information, avoiding potential cognitive contamination and resulting in more reliable conclusions. The adoption of a case manager responsible for filtering contextual information out of casework is supported in the literature and is already in place in several forensic units [2]. By examining digital images instead of original documents, it is also possible to better preserve the original properties of the latter. Forensic institutions with two or more units in different locations may also benefit from this methodology, since they will be able to quickly distribute examination requests without the need to send all casework documentation. For relevant, urgent requests with multiple questioned handwritings and several suspected authors, it is also possible to split the request among several FHEs, who can perform a parallel rather than a serial examination. The author believes that this work is a small but relevant contribution to the forensic science community's efforts to manage contextual information and prevent cognitive bias. Additionally, in suggesting the use of free and open-source software, the author draws attention to the need to develop low-cost solutions that may benefit forensic providers in developing countries. [1] Annual Report of the Government Chief Scientific Adviser 2015. Forensic Science and Beyond: Authenticity, Provenance and Assurance. The Government Office for Science, London. (Chapter 4: Cognitive and Human Factors. Dror, Itiel.) [2] Found, Bryan et al. The management of domain irrelevant context information in forensic handwriting examination casework. Science and Justice, Volume 53, Issue 2, 154-158.
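    As a rough programmatic analogue of the case-manager cropping step (the abstract recommends GIMP; this sketch instead uses the Pillow library, purely as an illustration), the following crops a hypothetical scanned document down to its handwriting region so that task-irrelevant context is excluded. The file name and crop coordinates are hypothetical.

      from PIL import Image

      # Open the full document scan prepared by the case manager (hypothetical file).
      doc = Image.open("questioned_document_scan.png")

      # Crop only the handwriting region selected by the case manager, leaving out
      # surrounding printed text and other task-irrelevant context.
      left, upper, right, lower = 220, 540, 980, 720
      handwriting_only = doc.crop((left, upper, right, lower))

      # Save the cropped image for delivery to the examining FHE.
      handwriting_only.save("questioned_item_01_cropped.png")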
     
  • Elizabeth Laposata
    Forensic Pathology & Legal Medicine, Inc.

    Error Reduction in Death Investigation:  A Defined Role for the Forensic Pathologist in Scientific Analysis and Reasoning Beyond the Autopsy  
    Errors can occur at many levels during investigation of death. These include crime scene processing and documentation (bloodstain pattern recognition, photography, and evidence collection); laboratory work (DNA, toxicology, trace evidence analysis); and postmortem examination (documentation of all that is possible and necessary regarding the body). However, on a larger scale, final integration of all these elements into a cohesive picture of the incident under investigation that ensures a fair and morally right outcome also requires methods that are subject to error. The forensic pathologist, trained in medicine and scientific analysis and accustomed to developing  differential diagnoses, is in a unique position to ensure competent death investigation by fully participating with colleagues in the integration of all scientific findings and abandoning a mindset that limits his or her role to examination of the body. To identify specific subject areas and develop a working model to integrate the forensic pathologist more closely with the death investigation process, 500 cases referred to the independent consulting firm, Forensic Pathology & Legal Medicine, Inc., Providence, RI, were examined.   Presentation of cases examined will illustrate particular areas where integration of the forensic pathologist into the death investigation is of substantial benefit and can prevent errors. These cases will focus on the analysis of infant deaths, sharp force and firearm deaths, and deaths in custody. Sequencing of events, identifying substantial intervening causes and events, integrating trace evidence analysis and correlating witness statements with events in the death investigation are areas of particular importance for scrutiny by the forensic pathologist. Further, involvement of the forensic pathologist early in the investigation will serve to develop lines of inquiry that result in the best and most economic utilization of testing resources. A triage menu has been developed to identify cases that will benefit most from such analysis by the forensic pathologist. Finally, knowledge of types of errors in thinking that can subtly infiltrate, contaminate and limit the comprehensive analysis of death by the forensic pathologist is also required. These include understanding differences between inductive and deductive reasoning, recognizing the tunnel vision of group think, being aware of the possible existence of silos of sequestered information, and acknowledging the entity of the so-called "wicked problems" that may be  impossible to solve because of various factors.  As a medical doctor, the forensic pathologist is also familiar with the cognitive biases associated with medical patient-oriented decision-making and can apply these skills to death investigation.  
     

  • Ranjan Maitra
    Iowa State University
    Fracture Mechanics-based Match Analysis of Evidence Fragments
    The complex, jagged trajectory of a macro-crack in forensic evidence is used by a forensic examiner to recognize a "match" through comparative microscopy. An analytical and statistical framework is proposed to inform the examiner with a quantitative match decision. A fragment's intrinsic material properties and microstructures, as well as its history of exposure to external forces, carry a premise of uniqueness. We exploit these unique features to provide quantitative signatures of the microscopic features on the fractured or torn surface for forensic comparison. The methodology utilizes 3D spectral analysis of the fracture surface topography, mapped by white-light non-contact surface profilometers. Statistical learning tools are used to classify specimens as matches or non-matches. When trained on images from a set of nine knife fragments, our tools provided very accurate results when used to predict matches and non-matches from images obtained from a test set of another nine knife fragments. The framework has the potential for application across a broad range of fractured materials and/or toolmarks, with diverse textures and mechanical properties.
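    To give a flavor of spectral comparison of surface topography, here is a minimal sketch, not the authors' method: it correlates radially averaged 2D FFT power spectra of two height maps. The synthetic surfaces stand in for real profilometer scans.

      import numpy as np

      def radial_power_spectrum(height_map, n_bins=40):
          """Radially averaged 2D power spectrum of a surface height map."""
          f = np.fft.fftshift(np.fft.fft2(height_map - height_map.mean()))
          power = np.abs(f) ** 2
          ny, nx = height_map.shape
          y, x = np.indices((ny, nx))
          r = np.hypot(x - nx / 2, y - ny / 2)
          bins = np.linspace(0, min(nx, ny) / 2, n_bins + 1)   # inscribed circle only
          idx = np.digitize(r.ravel(), bins)
          return np.array([power.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])

      def spectral_similarity(map_a, map_b):
          """Correlation of log power spectra; higher values suggest a better match."""
          sa = np.log(radial_power_spectrum(map_a))
          sb = np.log(radial_power_spectrum(map_b))
          return np.corrcoef(sa, sb)[0, 1]

      # Example with synthetic surfaces standing in for real scans.
      rng = np.random.default_rng(1)
      surface = rng.normal(size=(128, 128))
      noisy_rescan = surface + 0.05 * rng.normal(size=(128, 128))
      print(spectral_similarity(surface, noisy_rescan))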
     
  • Marcel Matley
    Handwriting Expert Consultant
    The Central Solution to All Unreliability in Evidence
    An elderly trial attorney was asked what accounted for his success. He replied: "Whatever the other side is doing, I pile on the facts as if I am stacking cordwood." The presentation will describe these factors: 1. How is the nature and purpose of evidence like that of cordwood? 2. How are they prepared and stacked similarly? 3. How are they used similarly? 4. How does all this defend against any bias or incompetence defeating your case? 5. How does it expose the faults in evidence by the other side?
     
  • Ashraf Mozayani 1
    Texas Southern University
    Pilot Study in Latent Print Conflict Management
    Verification procedures are typically implemented to mitigate errors in friction ridge analysis; however, because of the sequential progression through the latent print analytical process, examination of latent print evidence intrinsically lends itself to conflict. This conflict arises when two examiners do not agree on the analytical determination of value or on the evaluative conclusion of identification, exclusion, or inconclusive. As a result, error management in latent print sections has moved toward conflict resolution policies that outline appropriate action for handling differences of opinion regarding friction ridge impressions. Recognizing the potential for conflict, some agencies implement preemptive policies to mitigate it. Some latent print units use a minimum point, or feature, standard for determining the value of a latent print. Further, some agencies institute requirements before definitive conclusions can be reported, such as the presence of a focal point in the friction ridge impression like a delta or core. In spite of these preemptive standards, differences between examiners remain, such as years of experience and training, which may affect the ability to distinguish feature incidence and type from competing factors such as background noise, pressure and distortion effects, or processing technique. Accordingly, latent print examiners may consult with each other regarding observations in the impression and the corresponding area(s) of an exemplar, noting the similarities and/or differences used to formulate their respective conclusions. As the culture shifts toward standardization, what is the appropriate way to resolve these situations? This study analyzes responses from surveyed latent print examiners in an attempt to develop a recommendation.
     
  • Ashraf Mozayani 2
    Texas Southern University
    Standardization of Alcohol Interpretation
    The objective of this presentation is to discuss aspects and causes of incomplete alcohol interpretation and to identify steps for establishing an effective and rigorous policy and standard operating procedure that maintains standardized practices across all analysts. Alcohol interpretations can be generated using several published, peer-reviewed methods and acceptable analytical techniques that are traceable and defensible. Today forensic scientists are constantly under public scrutiny, and it is imperative that forensic laboratories be proactive in developing, implementing, and maintaining standardized policy for all validated analyses and services they provide to their customers and ultimately to the community. Forensic science laboratories are often faced with DUID and drug-facilitated sexual crime (DFSC) cases involving alcohol. The key issue in several of these cases is the blood alcohol concentration (BAC) of the individual at the time the incident occurred. Many experts attest to BAC extrapolations following manual calculations using the original Widmark factor (1932) for estimating volume of distribution, without consideration for successive improvements to the formulae by Watson (1981), Forrest (1986), Ulrich et al. (1987), and Seidl et al. (2000). This is likely due to perceptions of the complexity of employing the more complicated algorithms, which account for gender, age, weight, height, water content, and body mass index (BMI), compared to the single coefficient used by Widmark. A single-point BAC result derived by limiting the calculation to a single method does not reflect the entire range of possible values at the time of the incident. Limiting the calculation to an average of the physiological ranges, without consideration of a bounded interval of possible BAC values, does not address individual differences and therefore could present incomplete and potentially misleading information to a fact-finder when evaluating whether a specific individual's BAC was greater than a statutory level at a particular time prior to the direct measurements.
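    As a worked example of the interval-style reporting the abstract argues for (hypothetical numbers, not the presenter's calculations), the sketch below back-extrapolates a measured BAC over an elapsed time using a range of elimination rates rather than a single coefficient.

      # Back-extrapolate a measured BAC to the time of an incident, reporting an
      # interval rather than a single point value.  The measured value, time gap,
      # and elimination-rate range below are illustrative assumptions.

      measured_bac = 0.06      # g/100 mL, measured at the time of the blood draw
      hours_elapsed = 3.0      # hours between the incident and the blood draw

      # Per-hour elimination rates vary between individuals; a commonly cited range
      # of roughly 0.010 to 0.025 g/100 mL per hour is assumed here for illustration.
      beta_low, beta_high = 0.010, 0.025

      bac_at_incident_low = measured_bac + beta_low * hours_elapsed
      bac_at_incident_high = measured_bac + beta_high * hours_elapsed

      print(f"Estimated BAC at the time of the incident: "
            f"{bac_at_incident_low:.3f} to {bac_at_incident_high:.3f} g/100 mL")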
     
  • Robert Nelms
    Failsafe Network, Inc.
    Introspection, the Missing Link to True Root Cause Analysis
    Root Cause Analysis (RCA) has been used in aerospace, nuclear power, oil and gas, chemical, and other industries for decades, but it is beginning to get a nasty reputation. Leading-edge thinkers are beginning to call RCA a dinosaur, a relic from the past that has no significance in our present age of "enlightenment." The speaker, an internationally recognized root cause analysis expert, will suggest why RCA has been getting a bad rap and what has to happen to keep RCA relevant in the future. This presentation will suggest that everything that goes wrong in a system created by humans (including crime labs) can be traced back to humans. Therefore, aside from what is normally discovered in typical RCAs, when something goes wrong in a crime lab it is imperative for the people involved to answer two questions: what is it about the way we are, and what is it about the way I am, that contributed to this problem? Until we change "the way we are" and "the way I am," the real root causes of our problems will continue to fester. Crime labs in New York State have begun to latch on to this important addition to conventional Root Cause Analysis.
     
  • Jennifer Newman
    Iowa State University
    StegoDB: A dataset for detecting mobile phone steganography
    Sending messages hidden in image data is a way to transmit information in plain sight without calling attention to the fact that a message is being sent. This is called steganography; images with and without a hidden message are visually indistinguishable from one another. State-of-the-art detection of steganography, or steganalysis, is based on training machine learning algorithms (pattern classification algorithms) on large amounts of image data. An image can then be classified as having one of two forensic features: stego, if the image has been embedded with a hidden message, and cover, if the image is innocent (no hidden message). Here we present our research on detecting images that have embedded messages, where the stego images are created using a steganographic app on a mobile phone. Currently, there appear to be very few, if any, forensic tools that can successfully detect stego images created using an app. The database StegoDB, which is well designed and contains large amounts of authenticated images for benchmarking steg detection algorithms, provides us with data on which we run many experiments. To our knowledge, no other researchers have designed a similar approach to detect stego images using images generated by mobile phone steg apps. We analyze and present our results in the context of prior work by the academic steganalysis community, and discuss results of a few other more practically oriented methods published in the literature on general steg detection tools. We discuss the importance of the dataset used for training steg-detection algorithms and demonstrate the influence of the data on classification accuracy. We focus on PixelKnot, a stego app for Android phones. It uses a well-known JPEG-embedding algorithm called F5, developed by an academic researcher [X]. The F5 code is open-sourced on GitHub and was used by the PixelKnot developers in their app. We discuss the process we developed to create the steganalysis algorithm that detects messages hidden by PixelKnot. PixelKnot embeds in the JPEG domain, which requires different techniques for steg detection. The experiment we created for steg detection produces an estimate of the error rate of the classifier based on the data used to train the classifier and the training algorithm itself. We discuss the standard measurement of error for steg detection systems, including ours, as well as other characterizations of standard steganalysis algorithms and the types of errors possible. We also discuss the general problem of handling steg detection in the wild, which is called open set classification, and the difficulty of accurately detecting stego images in the context of current detection algorithms. We present examples using our dataset showing that, as errors are currently defined in steganalysis, error rates can be reduced using certain classes of data. Since there is apparently no published work on detecting stego images generated by these readily available mobile apps, we hope to initiate discussion on forensic tools that could fill this gap.
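    For readers unfamiliar with the machine-learning side of steganalysis, here is a minimal sketch, an assumption rather than the authors' pipeline: training and evaluating a binary cover-versus-stego classifier on precomputed image features and reporting the two error types discussed above. The feature file, its layout, and the classifier choice are hypothetical.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import confusion_matrix

      # Each row: feature vector extracted from one JPEG; last column: 0 = cover, 1 = stego.
      data = np.loadtxt("stegodb_features.csv", delimiter=",")
      X, y = data[:, :-1], data[:, -1]

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.3, stratify=y, random_state=0)

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      clf.fit(X_train, y_train)

      # Report the two error types relevant to steganalysis:
      # false alarms (cover labeled stego) and missed detections (stego labeled cover).
      tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
      print(f"False-alarm rate: {fp / (fp + tn):.3f}, "
            f"missed-detection rate: {fn / (fn + tp):.3f}")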
     
  • Anthony Onorato
    FBI Laboratory
    Genotyping Errors in the FBI STR Allele Frequency Database Used for Estimating Match Probabilities in Forensic Investigations
    In preparation for expansion of the DNA markers required for entry of profiles into the National DNA Database, the FBI Laboratory retested its reference population database samples with updated DNA typing kits in order to generate population frequency data for estimating profile probabilities that indicate the significance of a DNA match in a forensic case.  Results from retesting were compared with the original 1999 and 2001 DNA typing results, and a small number of genotyping differences were observed.  Upon investigation, the FBI determined these differences were mostly due to clerical mistakes, limitations of older technologies and software, and sample duplications.  In sum, 33 out of over 1100 DNA profiles (< 3%) showed discrepancies at one or a few loci in the profile.  A thorough review of the historical and contemporary data was undertaken.  To assess the impact of these errors, the FBI Laboratory partnered with recognized experts in forensic DNA testing and population statistics; the assessment revealed that probabilities calculated using the original and amended frequency data differ by no more than two-fold, which is well within the ten-fold variability supported by the National Research Council (1996) for match probabilities calculated using different datasets.  Based on DNA profiles spanning a range of common and rare match probabilities, the assessment concluded that the errors are unlikely to have a meaningful effect on any given case.  Several measures were undertaken to ensure the accuracy of the amended data and, given that many laboratories use the FBI population frequencies for estimating match probabilities, to disseminate information.  While statisticians are well aware that large datasets are expected to contain a small number of errors, and most casework analysts understand the negligible impact that small changes in frequencies may have on reported statistics, considerable attention was given to the potential for misunderstanding or exaggeration of the impact of the errors.  Within a month of concluding the assessment, an erratum describing the statistical analysis of the errors and the nominal impact on match statistics was accepted for publication.  To expedite the dissemination of information, the FBI also issued a bulletin through the Combined DNA Index System (CODIS) that included the amended dataset and an FBI point of contact.  In response, FBI scientists addressed nearly a hundred inquiries on technical, administrative, and policy matters from the community.  Soon thereafter, a second CODIS bulletin that included the expanded dataset was issued.  Several laboratories reported performing their own statistical assessments of the errors, confirming the FBI's findings.  In addition, the FBI Laboratory communicated with the American Society of Crime Laboratory Directors, accrediting bodies, and the Consortium of Forensic Science Organizations to discuss supporting these organizations in further disseminating information beyond DNA analysts to laboratory management and other stakeholders, including attorneys.  To this end, the FBI Laboratory participated in webinars for both technical and non-technical audiences and made presentations at a meeting of the Scientific Working Group on DNA Analysis Methods (SWGDAM) and at the Technical Leaders Summit held at the CODIS Conference.
Through the means described, the FBI Laboratory strove to expeditiously provide accurate detail on the origin and nature of the errors, its actions to address the matter, the statistical assessment of the errors, and its policies regarding statistics derived from the erroneous dataset and reported in casework.  These measures of full disclosure enabled stakeholders to understand and confirm the limited nature of the errors and their nominal impact on forensic match statistics, with the intent of ensuring accuracy, addressing potential challenges, and maintaining confidence in the past and present use of the FBI database.
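    To illustrate how changes in allele frequencies propagate into match statistics, the sketch below compares random match probabilities computed from two hypothetical frequency tables using the simple product rule.  The loci, frequencies, and genotype are invented, and the FBI's actual calculations follow NRC recommendations (including theta corrections) over full profiles, which are omitted here.

        # Illustrative sketch only: ratio of random match probabilities (RMPs)
        # computed with "original" and "amended" allele frequencies via the simple
        # product rule for a heterozygous two-locus genotype. All numbers are hypothetical.
        original = {"D8S1179": (0.102, 0.331), "D21S11": (0.252, 0.181)}
        amended  = {"D8S1179": (0.104, 0.331), "D21S11": (0.252, 0.178)}

        def rmp(freqs):
            """Product-rule RMP: multiply 2*p*q across heterozygous loci."""
            prob = 1.0
            for p, q in freqs.values():
                prob *= 2 * p * q
            return prob

        ratio = rmp(original) / rmp(amended)
        # A ratio near 1 stays far inside the ten-fold variability benchmark noted above.
        print(rmp(original), rmp(amended), ratio)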
     
  • Imani Palmer
    University of Illinois at Urbana-Champaign
    Toward Sound Analysis of Digital Evidence
    Digital forensics is a branch of forensic science practiced in the prosecution of crimes that involve digital devices.  The digital forensic investigative process involves the collection, preservation, and analysis of electronic data.  The goal of analysis is to draw conclusions from the evidence.  Currently, the analysis phase is largely reliant on the knowledge of each individual examiner and his or her own specific examination process.  We demonstrate the sound analysis of digital evidence through the implementation of graph theory and probabilistic graphical models.  We rely on the scientific method in order to maintain reproducibility.  The result of our analysis is a statistical likelihood of an occurrence of events based on the evidence.  We discuss and evaluate our approach through a case study.
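    The following sketch simplifies the graphical models described above to a single hypothesis with conditionally independent evidence items, showing the basic probabilistic step: each observed artifact contributes a likelihood ratio that updates the prior odds on the hypothesized event.  All probabilities are invented for illustration and are not drawn from the case study.

        # Hedged sketch: Bayesian updating of one hypothesis about an event given
        # independent items of digital evidence (hypothetical numbers throughout).
        prior_h = 0.5                                  # P(H) before examining evidence
        evidence = [
            # (P(E_i | H), P(E_i | not H)) for each observed artifact
            (0.90, 0.20),    # e.g., a wiping utility referenced in the registry
            (0.70, 0.30),    # e.g., timestamp anomalies on the volume
        ]

        odds = prior_h / (1 - prior_h)
        for p_e_given_h, p_e_given_not_h in evidence:
            odds *= p_e_given_h / p_e_given_not_h      # multiply by each likelihood ratio

        posterior_h = odds / (1 + odds)
        print(posterior_h)   # statistical likelihood of the event given the evidence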
     
  • Sandra Rodriguez-Cruz
    U.S. Drug Enforcement Administration
    Assessing the Uncertainty of Net Weight Measurements Throughout the Drug Enforcement Administration (DEA) Laboratory System

    When an item of evidence is submitted to a seized-drug laboratory, one of the first tests performed is the determination of net weight; that is, the weight of the actual material, not including wrappers or containers.  These measurements are critical, as they can significantly influence or outright determine sentencing outcomes.  As such, laboratories should have procedures in place to evaluate the factors contributing to the variability of net weight determinations so that the uncertainty associated with any net weight measurement can be accurately assessed, documented, and reported.  This presentation will summarize and discuss the policy and procedures implemented throughout the DEA laboratory system.  As a system comprising eight separate laboratories and more than 250 analysts, the DEA developed its methodology for estimating net weight uncertainties as a "quasi-budget" approach using laboratory system-wide balance calibration and performance verification data.  This allows assessment of the total variability expected across all laboratories, including different weighing equipment, users, environments, reference weight standards, and weighing procedures, among other factors.  As expected, the uncertainty associated with a particular net weight measurement also depends significantly on how the measurement was performed and how the total net weight was calculated.  Was it obtained by weighing all items directly and subtracting the weight of the containers/wrappings?  Or was it obtained by weighing a small sample of items and calculating the total net weight via some type of extrapolation?  Development of the DEA uncertainty policy also included revision and standardization of net weight determination procedures, to ensure consistency in their application while minimizing the effects of high-uncertainty influences.  Successful implementation of the DEA net weight uncertainty policy also involved the training of analysts and laboratory managers.  Training sessions included background information on statistics and metrology, the rationale behind DEA policy revisions, and the importance of communicating uncertainty information to laboratory customers and triers of fact.  To facilitate implementation of the policy and standardization of net weight measurement procedures and uncertainty calculations, a DEA Uncertainty Calculator was developed and validated for required documentation and inclusion in analysis case files.  This calculator was designed to accommodate various weighing procedures and scenarios applicable to solids, liquids, tablets/capsules, and bio-hazardous exhibits.  The calculator also documents the type of equipment used, the uncertainty factors considered, minimum weight requirements, acceptance criteria for measurements, and the weighing operations needed to complete a net weight determination.  The incorporation of the DEA Uncertainty Calculator into the laboratory information management system (LIMS) will also be discussed.  To conclude, this presentation will review numerous insights gained through implementation of the net weight uncertainty policy within the DEA laboratory system.  Emphasis will be placed on the importance of appropriate balance calibration procedures, proper balance usage, traceability to reference weight standards, and robust performance verification protocols.
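    The sketch below illustrates the general GUM-style combination of uncertainty components for a net weight obtained by direct weighing and tare subtraction.  It is not the DEA Uncertainty Calculator; the component values and coverage factor are assumptions chosen for illustration.

        # Generic sketch: expanded uncertainty for a net weight determined by weighing
        # the gross material and subtracting the container weight. All values are hypothetical.
        import math

        gross_g = 152.40          # hypothetical gross weight (g)
        tare_g = 27.15            # hypothetical combined container/wrapping weight (g)
        net_g = gross_g - tare_g

        # Hypothetical standard-uncertainty components per weighing (g)
        u_repeatability = 0.010   # from balance performance-verification data
        u_calibration = 0.005     # from the balance calibration certificate
        u_reference = 0.002       # traceability of reference weight standards

        # Gross and tare weighings each contribute the per-weighing components
        u_per_weighing = math.sqrt(u_repeatability**2 + u_calibration**2 + u_reference**2)
        u_combined = math.sqrt(2) * u_per_weighing

        U_expanded = 2 * u_combined   # coverage factor k = 2 (approximately 95 % coverage)
        print(f"Net weight: {net_g:.2f} g +/- {U_expanded:.3f} g (k=2)")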

    co-presented with Sandra Sachs
    Drug Chemists are Getting the Right Answers:  Assessing Drug Analysis Error Rates in Municipal, County, and Federal Laboratories
    What can decades' worth of proficiency test (PT) data, quality assurance (QA) measures, re-analysis results, an Excel spreadsheet, and a pot of coffee lead to?  The conclusion that drug chemists are doing a very good job of identifying materials submitted to forensic laboratories for analysis.  Do errors occur?  What is the error rate?  How confident should a laboratory be in its results?  These and many other questions can be answered using data that may already be available to laboratories, allowing assessment of error rates and, in some cases, the use of Bayesian analysis to formulate posterior probabilities characterizing the confidence and uncertainty associated with drug analysis procedures and the test results they produce.  This presentation will answer these questions with data obtained from one municipal, one county, and eight federal laboratories and address the differing analytical approaches used in these laboratories.  The forensic community has increasingly been asked to provide data supporting the accuracy and validity of its results (1-2).  One way to address this in the drug analysis discipline is to use PT data to assess how analysts perform while employing various analytical schemes.  Large laboratory systems, like the Drug Enforcement Administration (DEA), produce extensive data sets that provide statistically meaningful results.  This presentation will describe data obtained from DEA PTs completed during 2005-2016, comprising over 4700 outcomes.  Similar studies, however, are not feasible for small laboratories.  For example, the Oakland Police Department (OPD) Criminalistics Laboratory has passed all 87 PTs it has taken over the last 20 years.  Such small data sets are insufficient for the assessment of errors and could lead to "zero-error-rate" conclusions that would be wrong or misleading at best.  However, the OPD has a robust 20-year-old QA program through which it has collected and analyzed casework-derived quality control (QC) samples from over 3300 exhibits.  Also included are results from the re-analysis of over 1350 cases originally analyzed during 2002-2007 by the Kern Regional Crime Laboratory (KRCL) in Bakersfield, CA.  Although the analytical schemes employed by these federal, county, and municipal laboratories differ, estimated error rates using the different assessment approaches are all found to be below 1%.  Additional information can be obtained by combining error-rate results and Bayesian analysis to estimate posterior probabilities associated with positive identification results.  By making reasonable assumptions about prior probabilities relevant to the population, high-confidence (> 99%) and low-uncertainty (< 1%) estimates are obtained (a numerical sketch of this calculation follows the references below).  Lastly, error rates estimated from the analysis of OPD and KRCL data also confirm the reliability of analytical schemes employing microcrystalline testing, as these tests were used in the majority of original case analyses.  Error rates below 0.5% (6/3552 to date) are obtained, in agreement with the estimated error rates for the other laboratories in this study.  These results support the use of analytical schemes employing two Category B and one Category C technique, the minimum recommended by SWGDRUG (3) since 1999.  In summary, this study indicates that metrics and QA tools such as PT data and re-analysis support the validity and reliability of the individual methods and analytical schemes used by drug analysts.
(1) "Strengthening Forensic Science in the United States:  A Path Forward" National Academy of Sciences, 2009, Recommendation #3 (2) "Report to the President Forensic Science in Criminal Courts:  Ensuring Scientific Validity of Feature-Comparison Methods" Executive Office of the President President's Council of Advisors on Science and Technology (PCAST) September 2016.  Feature Comparisons can apply to data interpretation methods used in Drug Analysis. (3) Scientific Working Group for the Analysis of Seized Drugs; Recommendations available via http://swgdrug.org/approved.htm.
     

  • Ivana Sesardic
    NSW Forensic & Analytical Science Service
    Implementation of Blind Reviewing in a Forensic Biology Laboratory
    The 2016 report issued by the President's Council of Advisors on Science and Technology (PCAST) and scholarly articles such as those by Professor Gary Edmond [1] are heightening awareness in the forensic community regarding the risks of cognitive bias.  Bias is a natural human factor, and it is difficult to estimate or measure whether it has occurred; it is therefore important to have procedures and processes in place that mitigate the potential for bias.  The Forensic & Analytical Science Service includes a high-throughput DNA laboratory that provides forensic DNA testing services to the state of New South Wales.  The Forensic Biology Case Management Team provides hundreds of Expert Certificates each month, all of which undergo independent technical review.  Traditionally, this was a linear process, and the reviewer had visibility of the interpretations and conclusions of the reporting scientist.  This procedure is open to potential biasing effects, particularly confirmation bias.  In order to reduce bias, the technical review process was modified to ensure the reviewer was blind to any other interpretations.  Following the review, a reconciliation process occurs, in which any discrepancies or inconsistencies between the reporter and reviewer are addressed.  Procedures to deal with non-consensus have been implemented.  The documentation of the process is available to the court, ensuring transparency of any inconsistencies or limitations in the interpretation.  A pilot trial involving hundreds of cases was carried out over a six-month period and proved successful.  This has led to the recent extension of the review process to a wider range of cases.  The challenges and change management principles involved in the implementation of this blind independent review process will be discussed, along with the benefits.  This initiative has significantly enhanced the value of the technical review process, to the benefit of the presentation of evidence in court.  [1] Edmond, G. (2014) 'The "Science" of Miscarriages of Justice', University of New South Wales Law Journal, vol. 37, pp. 376-406.
     
  • Carolyn R. Steffen
    National Institute of Standards and Technology
    Lessons Learned From the Characterization of a Large Set of Population Samples: Identifying and Addressing Discordance
    A set of over 1000 DNA extracts has been extensively characterized at NIST.  These samples, known colloquially as NIST 1036, are largely male and have corresponding self-identified ancestries.  This set has been typed with 21 commercial and 3 in-house PCR multiplex assays, serving as a benchmark to determine assay performance, marker diversity, and concordance between methods/assays.  A primary result of this effort is the reporting of autosomal STR allele frequencies as a function of locus and population group [1].  These allele frequencies support the calculation of matching probabilities and likelihood ratios.  As new markers and assays are adopted by the forensic DNA typing community, NIST 1036 is retyped and updated to maintain a comprehensive set of information.  For example, the set has recently been tested with the new generation of STR kits containing the 20 CODIS core loci and with sequence-based STR kits capable of higher levels of multiplexing [2-3].  One of the potential outcomes of ongoing testing is discordance: when a reported genotype differs between assays.  This can result from PCR primer designs that coincide with sample- or population-specific mutations in flanking regions, and would be observed as a change in zygosity or a shift in allele call.  The goal of NIST 1036 is to reflect the genotype of the individual rather than a kit-specific anomaly, and Sanger and next-generation sequencing are useful tools to resolve such discrepancies [4-5].  Discordance resulting from laboratory or data analysis errors can also be identified through repeated testing of the same sample set with assays targeting overlapping loci.  Overall, concordance evaluation is a safety net that improves data quality and reduces errors inherent to large data sets.  In this presentation, we will illustrate examples of discordant autosomal STR allele calls for NIST 1036 that have recently been discovered via concordance testing.  Four sources of discordance were determined: 1) primer design differences, 2) laboratory error, 3) data analysis error, and 4) a change in the reporting of trialleles.  Six instances of discordance affect core loci: D5S818 (2x), D7S820 (1x), D13S317 (1x), and TPOX (2x).  An additional 13 instances of discordance affect the non-core loci Penta D, Penta E, and D6S1043.  The corresponding changes to allele frequencies and the effect on the magnitude of the random match probability (RMP) will be discussed (a simple concordance-checking sketch follows the references).  References [1] Hill, C.R., Duewer, D.L., Kline, M.C., Coble, M.D., Butler, J.M. (2013) U.S. population data for 29 autosomal STR loci. Forensic Sci. Int. Genet. 7: e82-e83. [2] Oostdik, K., Lenz, K., Nye, J., Schelling, K., Yet, D., Bruski, S., Strong, J., Buchanan, C., Sutton, J., Linner, J., Frazier, N., Young, H., Matthies, L., Sage, A., Hahn, J., Wells, R., Williams, N., Price, M., Koehler, J., Staples, M., Swango, K. L., Hill, C., Oyerly, K., Duke, W., Katzilierakis, L., Ensenberger, M. G., Bourdeau, J. M., Sprecher, C. J., Krenke, B., Storts, D. R. (2014) Developmental validation of the PowerPlex® Fusion System for analysis of casework and reference samples: A 24-locus multiplex for new database standards. Forensic Sci. Int. Genet. 12: 69-76. [3] Gettings, K. B., Kiesler, K. M., Faith, S. A., Montano, E., Baker, C. H., Young, B. A., Guerrieri, R. A., Vallone, P. M. (2016) Sequence variation of 22 autosomal STR loci detected by next generation sequencing. Forensic Sci. Int. Genet. 21: 15-21. [4] Kline, M.C., Hill, C.R., Decker, A.E., Butler, J.M.
(2011) STR sequence analysis for characterizing normal, variant, and null alleles.  Forensic Sci. Int. Genet. 5: 329-332. [5]    Gettings KB, Aponte RA, Kiesler KM, Vallone PM. (2015) The next dimension in STR sequencing: Polymorphisms in flanking regions and their allelic associations. Forensic Sci. Intl. Genet. Supplement Series 5: e121-e123.
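    The sketch below shows, with hypothetical genotype records rather than NIST 1036 data, how concordance testing between two assays can flag a discordant call (such as a change in zygosity) for follow-up by sequencing or record review.

        # Hedged sketch: flagging discordant autosomal STR calls between two assays
        # typed on the same samples. Sample names, loci, and calls are hypothetical.
        kit_a = {("Sample_001", "D5S818"): (11, 12), ("Sample_001", "TPOX"): (8, 8)}
        kit_b = {("Sample_001", "D5S818"): (11, 12), ("Sample_001", "TPOX"): (8, 11)}

        def normalize(genotype):
            """Sort allele calls so (8, 11) and (11, 8) compare as equal."""
            return tuple(sorted(genotype))

        discordant = [
            key for key in kit_a
            if key in kit_b and normalize(kit_a[key]) != normalize(kit_b[key])
        ]
        for sample, locus in discordant:
            # A zygosity change such as (8, 8) vs (8, 11) prompts investigation of
            # primer-binding-site variants, laboratory error, or data-analysis error.
            print(f"{sample} {locus}: {kit_a[(sample, locus)]} vs {kit_b[(sample, locus)]}")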
     
  • Harish Swaminathan
    Boston University
    Parameterization of an In Silico DNA Pipeline with Laboratory-Specific Experimental Data Allows for Efficient Validation of the DNA Analysis Process
    The DNA analysis pipeline in a forensic laboratory produces DNA fluorescence data that are used to compute the strength of evidence.  Typically, the statistic used to convey evidential strength takes the form of a likelihood ratio (LR).  The DNA data typically consist of allele signal, artifacts (such as stutter), and background signal.  In the case of electropherogram (EPG) data, an analytical threshold (AT) is typically applied to the EPG to demarcate the signal level at which allele signal is substantively different from background.  The application of an AT, like any classification system, has errors associated with it: false positive (Type I) errors, in which noise peaks are mislabeled as allele peaks, and false negative (Type II) errors, in which allele peaks are not labeled, resulting in dropout.  These errors are then propagated to the match statistic, which may ultimately affect downstream interpretation.  Previous work on the subject has demonstrated that baseline noise increases with template mass and is color dependent, and that the AT may be set in accordance with the laboratory's risk assessment [1]; thus, to ensure that the maximal amount of allelic signal is utilized during interpretation, knowledge of the optimal values for the various parameters used within the DNA pipeline, such as the AT, the number of PCR cycles, and the injection time, is desirable to minimize the false positive and false negative detection error rates.  To this end, we have devised a computational system that simulates the EPG generation process in a laboratory-specific manner.  As input to the system, a large number of single-source profiles of known genotype are provided by the laboratory.  From these data, the distribution of peak heights at noise positions is modeled as a function of the starting template amount.  The electrophoresis sensitivity, which is used to generate the signal distribution, is also acquired from the single-source experimental data procured from the laboratory.  Values for parameters such as the number of PCR cycles, injection time, and starting template mass are also used as input.  This enables the simulation of a large number of artificial EPGs, corresponding to the laboratory's protocol, in which noise is simulated at any desired template amount and the signal is simulated from a single copy.  From the simulated EPGs, the false positive and false negative detection error rates are estimated at each signal threshold (for example, in the range of 1-150 RFU).  This information enables the laboratory to investigate the impact of modifying the AT on the detection error rates and to choose an AT that minimizes the false positive rate, the false negative rate, or both.  An additional benefit of this system is that it also allows the exploration of values for laboratory parameters, such as the number of PCR cycles and the injection time, that enhance the separation of signal from noise.  [1] U.J. Monich et al., Probabilistic characterization of baseline noise in STR profiles, Forensic Sci. Int. Genet. 19 (2015) 107-122.
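    A simplified sketch of the threshold sweep described above follows: noise and single-copy signal peak heights are drawn from placeholder distributions (not fitted to any laboratory's validation data), and the false positive and false negative detection rates are tabulated across candidate ATs.

        # Hedged sketch: estimating detection error rates as a function of the analytical
        # threshold (AT). The noise and single-copy signal distributions are arbitrary
        # placeholders, not laboratory-calibrated models.
        import numpy as np

        rng = np.random.default_rng(1)
        noise_rfu = rng.gamma(shape=2.0, scale=4.0, size=100_000)      # placeholder noise peak heights
        signal_rfu = rng.lognormal(mean=4.0, sigma=0.5, size=100_000)  # placeholder single-copy allele heights

        for at in range(1, 151, 25):                      # candidate ATs (RFU)
            false_positive = np.mean(noise_rfu >= at)     # noise peaks mislabeled as allele peaks
            false_negative = np.mean(signal_rfu < at)     # allele peaks lost to dropout
            print(f"AT={at:3d} RFU  FPR={false_positive:.4f}  FNR={false_negative:.4f}")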
     
  • Nicholas Tiscione
    Palm Beach County Sheriff's Office
    Mitigating Errors and Establishing Priorities Using Case Management Policies in a DUI Lab
    Workplace stress and employee well-being have been identified as factors that can impact the quality of forensic analyses.  Many workplaces, including forensic laboratories, share common causes of stress, including workload volume, tight deadlines, changing priorities, and unrealistic job expectations.  In the field of forensic toxicology, the movement toward requirements for laboratory accreditation and analyst certification, as well as recommendations for common standards for method validation and scope of services, has created additional stress on existing resources.  While some of these requirements are both beneficial and necessary, laboratory management and policymakers should carefully consider how the additional stress on resources may impact overall quality.  Standard practices for validating methods in forensic toxicology were published in 2013 by the Scientific Working Group for Forensic Toxicology and outlined a rigid set of required experiments and acceptance criteria.  Although valuable, complying with the method validation standard practices as written can be difficult, as they do not account for reasonable exceptions that may need to be considered for specific applications.  The time and expense of method development and validation can be significant, especially for the analysis of the rapidly evolving list of novel psychoactive compounds.  Several months or more are commonly required to complete the process, and it is often extended by simultaneous casework demands on the analyst.  This amount of time can be significant when addressing novel compounds, as the specific list of compounds may be entirely different in a few months' time.  In addition, new technology is developed at an increasingly rapid rate and is often required to detect new or novel compounds or to make processes more efficient.  However, this technology may be exceedingly expensive, and even when funds can be procured, laboratories must devote considerable analyst time, often a year or more, to validation of the instrumentation along with specific method development and validation.  In 2013 the National Safety Council's Alcohol, Drugs and Impairment Division published recommendations for forensic toxicology laboratories that conduct testing in cases of driving while impaired by drugs.  The intent of these recommendations was to establish a uniform scope of testing in these cases.  One aspect of the recommendations was to perform drug testing on all driving under the influence (DUI) cases regardless of the concentration of substances (e.g., alcohol above a per se threshold) that were identified.  The justification for this specific recommendation was twofold: first, drug prevalence studies indicated that a large number of drivers were positive for drugs both with and without alcohol; second, there was a need to gather statistical information on drug use by drivers.  Proponents of the recommendations have used them to advocate for more resources to move toward wholesale drug testing of DUI cases.  Unfortunately, drug prevalence studies do not provide sufficient data to support the recommendation for blanket drug testing.  Recent studies that have used quantitative drug data have demonstrated that, for the vast majority of cases, the drug results are not meaningful when considered alongside the concurrent alcohol concentration.
This research also included an estimate of the cost of implementing drug testing on all specimens submitted for impaired driving cases, compared to the common practice of performing drug testing only on cases with an alcohol concentration of less than 0.1 g/dL.  The estimate indicated that, at a minimum, the laboratory staffing and materials budget would need to be doubled.  Establishing priorities and devising effective case management policies are essential to minimizing errors caused by overwhelming the resources of laboratories.  This is true regardless of how many additional resources may be obtained.  Each laboratory should carefully consider the laws in its jurisdiction, the dynamics of its population, and its available resources when developing its case management policies.  This will enable the laboratory to provide the most useful, objective, and timely high-quality analysis for the criminal justice system while meeting new standards of practice.
     
  • Richard Torres
    The Legal Aid Society
    Is Source Code Speech Under the Confrontation Clause?
    Bullcoming v. New Mexico, Melendez-Diaz v. Massachusetts, and Williams v. Illinois address the question of whether a defendant has the right to question a person who observes a forensic evidence examination.  Nowadays, probabilistic genotyping programs such as STRmix, TrueAllele, and DNAview perform analyses that human DNA analysts traditionally performed by eye.  This raises the question of what confrontation rights, if any, a defendant has against software that performs forensic interpretations.  I argue that software is speech.  Computer code is recognized as a form of speech under the First Amendment, and there is no reason it should not be recognized as speech under the Sixth Amendment.  Defendants need to be allowed to see how computer software creates evidence used against them in court.  Otherwise, defendants are being prosecuted with secret out-of-court statements.  This is at odds with our system of justice, which aspires to openness and condemns Star Chamber justice.