Single Event Effects (SEE) Symposium featuring the Systems-on-a-Chip (SoCs) for Space Workshop!
(SEE Symposium featuring SoCs for Space Workshop)
May 11 – 15, 2026
Marriott La Jolla
San Diego, California
2026 SEE/SoCS Tutorial
Did I Make the Right Decision?
Best practices for gathering and analyzing FPGA/SoC SEE data for risk-informed decision-making
Chaired by: Codie Mishler, Northrop Grumman Corporation
This tutorial is intended to aid:
- Semiconductor manufacturers providing data for their customers,
- Test organizations looking to provide high-quality data within budgetary constraints, and
- Flight projects making risk-informed decisions on the selection and application of these enabling technologies.
Title: Why Do We Test for Single Event Effects (SEEs)?
Kenneth LaBel, Trusted Strategic Solutions, LLC.
A common misunderstanding exists about SEEs: that all test data are equivalent. This is far from the truth. In this module, we provide guidance on determining the reason, purpose, and goal for which a test is undertaken, and we discuss how that intent affects the data (or the requirements for the data) that are gathered. Throughout the module, we'll use a framework for discussing the reasons, purposes, and goals of SEE testing.
Title: Calculating Uncertainty in Upset Rates
Dave Hansen, L3Harris
While calculating the error bars on single-event-effects data is typically treated as a straightforward exercise in statistics, the error bars on a single-event rate are often ignored. In this talk, Dave Hansen will address the sources of uncertainty in single-event rates, including curve fitting, environmental uncertainty, and model error. In some cases these are simple to handle; in others they are unmanageable. The talk will focus less on the statistics and more on useful rules of thumb to help you make sense of your analysis and guide your test matrix when you have beam time.
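As a toy illustration of the counting statistics the abstract alludes to (not material from the talk itself), the Python sketch below computes a cross section and its Poisson error bars from a hypothetical beam run; the event count and fluence values are made up for the example.

    import math

    # Hypothetical beam-test observation (illustrative numbers only).
    n_events = 25        # upsets observed during the run
    fluence = 1.0e7      # total particle fluence, ions/cm^2

    # Point estimate of the SEE cross section (cm^2 per device).
    sigma = n_events / fluence

    # For Poisson-distributed counts, the 1-sigma uncertainty on N is
    # sqrt(N), so the relative statistical error on sigma is 1/sqrt(N).
    sigma_err = math.sqrt(n_events) / fluence

    print(f"cross section = {sigma:.2e} +/- {sigma_err:.2e} cm^2 "
          f"({100.0 / math.sqrt(n_events):.0f}% relative)")

Counting statistics are only the easy part; the abstract's point is that curve-fitting, environmental, and model errors are often larger and harder to bound.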
Title: Seeing the Unseeable: Experimental Visibility of Complex Systems
Seth Roffe, Shift5.io
In the coming age of commercial-off-the-shelf (COTS) systems-on-chip (SoCs), radiation engineers must design more and more radiation tests for these complex devices. However, because these devices contain many sub-components, gaining visibility into the effects encountered during a radiation test can be difficult. Similarly, the lack of debugging and experimentation tools designed around COTS components limits engineers' insight into the details of their systems. The radiation tests we design must work around these limitations: through careful data collection and clever experimental design, we can narrow down the underlying effects enough to draw proper conclusions.
Title: SEE Response of SoCs and Matching Observations to Underlying SEEs
Steven Guertin, JPL
Modern SoCs complicate SEE testing because of their heterogeneous architectures, configuration-dependent responses, and fault-tolerant features. This can cause beam tests, which are already modeled and accelerated versions of real flight environments, to diverge from true on-orbit performance. Radiation test engineers use approximate applications and configurations that almost never match the flight application, and never match the actual environment. This tutorial will explore the relationship between how a device is operated during testing, the observed test results, and how to generalize those results to other applications the customer may need in the future. We will describe how to connect library- and subsystem-level SEE sensitivity to device- and application-level results. By relating best-case, worst-case, and nominal device responses to the collected SEE data, radiation engineers can judge the quality of the data at the time of collection and give customers a range of expected SEE rates as the application changes. This approach allows SEE test data to be shared in a way that is more useful to the wider community.
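As a rough sketch of how bounding device responses translate into a range of expected rates (a toy figure-of-merit calculation, not the tutorial's method), the example below multiplies hypothetical best- and worst-case cross sections by an assumed orbit-averaged flux; real rate predictions fold the full cross-section curve over an LET spectrum using environment tools such as CREME96.

    # Illustrative bounding cross sections (cm^2 per device) taken from
    # hypothetical best- and worst-case test configurations.
    sigma_best = 2.0e-8
    sigma_worst = 6.0e-7

    # Assumed orbit-averaged flux of particles above the device's onset
    # LET, in particles/cm^2/day (a single number stands in here for a
    # full environment model).
    flux_per_day = 1.0e4

    rate_best = sigma_best * flux_per_day     # upsets per device-day
    rate_worst = sigma_worst * flux_per_day

    print(f"expected upset rate: {rate_best:.1e} to "
          f"{rate_worst:.1e} upsets/device/day")

Quoting the rate as a range rather than a point estimate is the kind of customer-facing summary the abstract describes.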
Title: Better Sets of SEE SoC-FPGA Data: A Checklist Approach for Test Plans and Reports
Kenneth LaBel, Trusted Strategic Solutions, LLC.
As discussed in Module 1, not all tests are created equal (i.e., they don't have the same reason, purpose, or goal). In Module 5, we present a tailorable checklist approach for planning and reporting SEE data. The basic idea is two-fold: identify the completeness of the test performed and define its limitations. This allows readers of the report to better understand how they may apply the data. By the end of the module, participants should be able to answer a set of important questions about a SEE test report.
Contact Us
Justin Likar, JHU/APL, General Chair
Ian Troxel, Troxel Aerospace Industries, Inc., SoCS Chair
George Tzintzarov, The Aerospace Corporation, SEE Technical Chair
Codie Mishler, Northrop Grumman, Tutorial Chair
Martha O’Bryan, TSS, Poster Session Chair
Larisa Milic, EMPC, Exhibits Chair
Teresa Farris, Archon, LLC, Meeting Planner
Carl Szabo, CS Enterprises, A/V Chair
