N3B1  Data Treatment Techniques

Wednesday, Nov. 4  10:30-12:30  Town and Country

Session Chair:  Maria Grazia Pia, INFN Genova, Italy; Tiehui Liu, Fermilab, United States


(10:30) N3B1-1, Systematic Uncertainties in High-Rate Germanium Data

A. J. Gilbert, J. E. Fast, B. G. Fulsom, W. K. Pitts, B. A. VanDevender, L. S. Wood

Pacific Northwest National Laboratory, Richland, WA, USA

For many nuclear material safeguards inspections, spectroscopic gamma detectors are required that can sustain high event rates (in excess of 10^6 s^-1) while maintaining very good energy resolution for discriminating neighboring gamma signatures in complex backgrounds. Such spectra are useful for non-destructive assay (NDA) of spent nuclear fuel with long cooling times, which contains many potentially useful low-rate gamma lines, e.g., from Cs-134, in the presence of a few dominant gamma lines, such as those of Cs-137. Detectors in current use typically sacrifice energy resolution for count rate, e.g., LaBr3, or vice versa, e.g., CdZnTe. In contrast, we anticipate that starting from a detector with high energy resolution, e.g., high-purity germanium (HPGe), and adapting the data acquisition for high throughput can achieve the goals of the ideal detector. In this work, we present quantification of Cs-134 and Cs-137 activities, useful for fuel burn-up quantification, in fuel that has been cooling for 22.3 years. A segmented, planar HPGe detector, adapted for a throughput in excess of 500k counts/s, is used for this inspection. Using a very-high-statistics spectrum of 2.4×10^11 counts, isotope activities can be determined with very low statistical uncertainty. However, systematic uncertainties are found to dominate in such a data set, e.g., the uncertainty in the pulse line shape. This spectrum offers a unique opportunity to quantify this uncertainty and subsequently determine the counting times required for a given precision on values of interest.
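The trade-off the abstract describes between statistical precision and an irreducible systematic floor can be illustrated with a back-of-the-envelope calculation. The sideband background model, parameter names, and example numbers below are illustrative assumptions, not the paper's analysis:

```python
import math

def required_counting_time(peak_rate, bkg_rate, target_rel_unc, sys_floor=0.0):
    """Live time needed for a target relative uncertainty on a net peak area.

    Illustrative model (not the paper's analysis): the peak sits on a
    background estimated from an equal-exposure sideband, so the net-count
    variance is (peak_rate + 2*bkg_rate)*t, and an irreducible systematic
    floor (e.g. line-shape uncertainty) adds in quadrature.
    """
    if target_rel_unc <= sys_floor:
        raise ValueError("target precision is below the systematic floor; "
                         "no counting time can reach it")
    # statistical budget left after removing the systematic floor in quadrature
    stat_target = math.sqrt(target_rel_unc**2 - sys_floor**2)
    # sqrt((peak + 2*bkg)*t) / (peak*t) = stat_target  =>  solve for t
    return (peak_rate + 2.0 * bkg_rate) / (peak_rate**2 * stat_target**2)
```

Under these assumptions, a background-free 100 counts/s line reaches 1% statistics in 100 s, but a 0.6% systematic floor stretches the same 1% total budget to about 156 s, and no counting time beats the floor itself.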

(10:50) N3B1-2, Kalman-Filter-Based Particle Tracking on Parallel Architectures at Hadron Colliders

G. Cerati1, M. Tadel1, F. Wuerthwein1, A. Yagil1, S. Lantz2, K. McDermott2, D. Riley2, P. Wittich2, P. Elmer3

1Dept. of Physics, University of California, San Diego, San Diego, CA, USA
2Dept. of Physics, Cornell University, Ithaca, NY, USA
3Dept. of Physics, Princeton University, Princeton, NJ, USA

Power density constraints are limiting the performance improvements of modern CPUs. In response, lower-power, multi-core processors such as GPGPUs, ARM, and Intel MIC have been introduced. To stay within the power density limits while still obtaining Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems of event reconstruction in particle physics. At the High-Luminosity LHC (HL-LHC), for example, it will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques, such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on the Kalman Filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. We report on porting these algorithms to new parallel architectures. Our previous investigations showed that, using optimized data structures, track fitting with a Kalman Filter can achieve large speedups on both Intel Xeon and Xeon Phi. We report here our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic experimental environment.
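As a loose illustration of why Kalman-filter fitting vectorizes well, the toy sketch below filters many straight-line track candidates in lockstep across detector layers, so every numpy operation acts on all tracks at once, the structure-of-arrays pattern the abstract alludes to. The two-parameter track model and all numerical settings are assumptions for illustration, not the authors' code:

```python
import numpy as np

def kalman_track_fit(hits, dz, meas_var, proc_var=1e-4):
    """Vectorized Kalman filter over many track candidates simultaneously.

    hits: (n_tracks, n_layers) measured transverse positions, one per layer.
    State per track: [position, slope]; propagation between layers is linear.
    All tracks are updated in lockstep, which maps naturally onto SIMD units.
    """
    n_tracks, n_layers = hits.shape
    x = np.zeros((n_tracks, 2))                   # state: [position, slope]
    x[:, 0] = hits[:, 0]                          # seed from the first hit
    P = np.tile(np.diag([meas_var, 1.0]), (n_tracks, 1, 1))  # covariances
    F = np.array([[1.0, dz], [0.0, 1.0]])         # layer-to-layer propagation
    Q = np.diag([0.0, proc_var])                  # process noise
    for layer in range(1, n_layers):
        # predict: propagate every track to the next layer
        x = x @ F.T
        P = F @ P @ F.T + Q
        # update with a position-only measurement (H = [1, 0])
        resid = hits[:, layer] - x[:, 0]
        S = P[:, 0, 0] + meas_var                 # innovation variance
        K = P[:, :, 0] / S[:, None]               # gain, shape (n_tracks, 2)
        x = x + K * resid[:, None]
        P = P - K[:, :, None] * P[:, None, 0, :]  # (I - K H) P
    return x
```

Running this on noiseless straight tracks recovers position and slope at the last layer; a real implementation adds material effects, outlier rejection, and the combinatorial finding stage.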

(11:10) N3B1-3, Model-Based, Analytical Maximum-Likelihood Deconvolution for CZT Detectors

M. J. Neuer1, E. Jacobs2

1Quality and Information Technology, VDEh Betriebsforschungsinstitut, 40237 Duesseldorf, Germany
2R&D, innoRIID, 41516 Grevenbroich, Germany

An analytical response function is presented that is based on a perturbed Pearson-IV model for the peak shape and on additional models for the first four moments of the distribution: mean, variance, skewness, and kurtosis. The parametrisation of these moment models is obtained with an evolutionary ensemble algorithm, which delivers a corresponding energy-dependent function for each moment. Using the relationship between the moments and the actual peak shape, this analytical expression is introduced into the maximum-likelihood inversion, yielding a formula for deconvolving spectra. Through the parametrisation, the technique can be tailored to any CZT detector. Because no large response matrix has to be stored, the analytical deconvolution can also be deployed easily on smaller devices with less computational performance. The technique is validated on a Raspberry Pi connected to a muSPEC500 device from Ritec, featuring a 300 mm^3 CZT crystal. Example measurements for cesium (137Cs), cobalt (60Co), europium (152Eu), and uranium (238U) are deconvolved and show a significant capability of resolving peaks.
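A minimal sketch of the matrix-free maximum-likelihood idea follows, substituting a Gaussian peak shape for the perturbed Pearson-IV model; the analytical response would slot into the same place. Everything here is an illustrative stand-in, not the paper's parametrisation:

```python
import numpy as np

def ml_deconvolve(spectrum, sigma_of_e, energies, n_iter=50):
    """Maximum-likelihood (Richardson-Lucy style) deconvolution with an
    analytically generated detector response. Stand-in assumption: a Gaussian
    peak shape of energy-dependent width sigma_of_e(E); a perturbed
    Pearson-IV model would replace response_col() the same way."""
    def response_col(j):
        # response to a delta line at energies[j], normalized to unit area
        s = sigma_of_e(energies[j])
        col = np.exp(-0.5 * ((energies - energies[j]) / s) ** 2)
        return col / col.sum()
    # On memory-constrained devices the columns can be generated on the fly;
    # here they are stacked once for clarity.
    R = np.stack([response_col(j) for j in range(len(energies))], axis=1)
    est = np.full_like(spectrum, spectrum.mean(), dtype=float)
    for _ in range(n_iter):
        model = R @ est                        # forward-folded estimate
        ratio = np.where(model > 0, spectrum / model, 0.0)
        est = est * (R.T @ ratio)              # multiplicative ML update
    return est
```

With unit-area response columns, each update conserves the total counts of the measured spectrum while concentrating them back into sharp lines.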

(11:30) N3B1-4, A Block-Eliminating Method by Limited-View Scan in a Dynamic CT System for Running Aero-Engine

F. Han1,2, Y. Xiao1,2, M. Chang1,2

1Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Beijing, China
2Department of Engineering Physics, Tsinghua University, Beijing, China

To detect deformations and monitor inner structures more effectively during operation, we have designed an innovative CT system that makes use of the object's self-rotation. However, some parts remain static and contaminate the reconstructed image with ring artifacts. In this paper, we put forward a method that uses a limited-angle scan to obtain a prior image, then applies iterative image registration and forward projection to eliminate these artifacts. Both numerical and physical experiments are performed to verify the feasibility of the method.

(11:50) N3B1-5, System-Independent Dual-Energy Computed Tomography for Characterization of Materials

I. M. Seetho1, S. G. Azevedo2, H. E. Martz1, M. B. Aufderheide2, W. D. Brown1, K. M. Champley1, J. S. Kallman1, P. Roberson2, D. Schneberk2, J. A. Smith2

1LLNL Nondestructive Characterization Institute, Livermore, CA, United States
2Lawrence Livermore National Laboratory, Livermore, CA, United States

In a recent project, we explored properties of materials and of Dual-Energy CT (DECT) scanners to find a system-independent X-ray feature space that allows comparison between scanners. Such a feature space is needed to quantitatively characterize materials for nondestructive evaluation (NDE), security, and medical applications. DECT produces linear attenuation coefficients (LACs) which can be used to estimate the electron density (ρe) and effective atomic number of materials. Even with careful measurement protocols and calibration, these estimates from different DECT scanners can differ when evaluating the same specimen material. Single-scanner results can also disagree as thermal drifts or aging cause spectral-response changes. We propose a new technique for DECT processing that calculates images of ρe and of an alternative-definition effective atomic number, Ze, based on X-ray cross sections. The technique, called System-Independent ρe/Ze, or SIRZ, performs a photoelectric-Compton decomposition of LAC data using estimates of the system spectral response. Feature-space ground-truth values of concurrently scanned reference materials are compared with measurements to solve empirically for the constants in approximation formulas relating photoelectric and Compton attenuation to Ze and ρe, as a form of system calibration. The formulas are then used to solve for the Ze and ρe of the specimen material. This process removes system variability from the observed measurements. In our experiment, eight different specimens ranging in Ze from 6 to 20 were analyzed with two physical DECT systems, one of which was operated under five different source spectra to yield four distinct high-low spectral pairs. For these materials, we computed Ze and ρe results matching ground-truth values to within 2% in precision and 3% in accuracy, significantly better than existing DECT methods in use, which show errors of over 9% on the same data.
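The photoelectric-Compton decomposition at the heart of this approach can be sketched in simplified form at two fixed effective energies. The E^-3 photoelectric model and Klein-Nishina Compton term below are textbook stand-ins: SIRZ itself integrates over measured system spectral responses rather than assuming two monoenergetic measurements:

```python
import numpy as np

def kn(e_kev):
    """Total Klein-Nishina cross section (arbitrary units) at energy e_kev."""
    a = e_kev / 511.0
    return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2)

def pe_compton_decompose(mu_low, mu_high, e_low, e_high):
    """Solve for photoelectric and Compton amplitudes from LACs measured at
    two effective energies (keV), assuming LAC(E) = a_pe * E^-3 + a_c * KN(E).
    This two-point model is a simplification of the SIRZ spectral treatment."""
    A = np.array([[e_low**-3, kn(e_low)],
                  [e_high**-3, kn(e_high)]])
    a_pe, a_c = np.linalg.solve(A, np.array([mu_low, mu_high]))
    return a_pe, a_c
```

Ze and ρe would then follow from the photoelectric-to-Compton ratio and the Compton amplitude via approximation formulas whose constants are fit against the concurrently scanned reference materials, as the abstract describes.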

(12:10) N3B1-6, Short-Term Gamma Background Anticipation Using Learning Gaussian Processes

M. Alamaniotis, C. K. Choi, L. H. Tsoukalas

School of Nuclear Engineering, Purdue University, West Lafayette, IN, USA

The use of machine intelligence for anticipating gamma-ray background radiation is discussed, and test results on experimentally obtained spectra are presented. In this paper, a Gaussian process is adopted to learn recent measurements and subsequently anticipate the next background measurements over a short time horizon ahead. Anticipated values are compared with the respective incoming measurements, and a decision is made whether an alarm should be raised. The proposed approach is tested on a set of real experimental gamma-ray measurements taken with a low-resolution detector.
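The anticipate-and-compare scheme can be sketched with a plain-numpy Gaussian process regression. The RBF kernel, its hyperparameters, and the 3-sigma decision rule are illustrative assumptions, not the settings of the paper:

```python
import numpy as np

def gp_anticipate(t_past, y_past, t_next, length=5.0, sig2=1.0, noise=0.25):
    """One-step-ahead Gaussian-process prediction of the background level
    from recent measurements (RBF kernel; hyperparameters illustrative)."""
    def k(a, b):
        return sig2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(t_past, t_past) + noise * np.eye(len(t_past))
    ks = k(t_past, np.array([float(t_next)]))[:, 0]
    mu0 = y_past.mean()                       # centre the data before the GP
    alpha = np.linalg.solve(K, y_past - mu0)
    mean = mu0 + ks @ alpha                   # predictive mean
    var = sig2 - ks @ np.linalg.solve(K, ks) + noise  # predictive variance
    return mean, var

def alarm(measured, mean, var, n_sigma=3.0):
    """Alarm when the incoming measurement deviates from the anticipated
    background by more than n_sigma predictive standard deviations."""
    return abs(measured - mean) > n_sigma * np.sqrt(var)
```

On a steady background the predicted value tracks the recent level, routine fluctuations stay inside the predictive band, and a genuine source injection falls far outside it and triggers the alarm.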