N2B4  Data Acquisition & Control Systems

Tuesday, Nov. 3  10:30-12:30  Pacific Salon 1&2

Session Chairs:  Giovanna Lehmann Miotto, CERN, Switzerland; Patrick Le Dû, IPNL, IN2P3, France


(10:30) N2B4-1, A Scalable DAQ System with High-Rate Channels and FPGA- and GPU-Trigger for the Dark Matter Search Experiment EDELWEISS-III

T. Bergmann1, M. Balzer1, D. Bormann1, S. A. Chilingaryan1, K. Eitel2, M. Kleifges1, A. Kopmann1, V. Kozlov2, A. Menshikov1, B. Siebenborn2, D. Tcherniakhovski1, M. Vogelgesang1, M. Weber1

1IPE (Institute for Data Processing and Electronics), KIT Karlsruhe Institute of Technology, Karlsruhe, Germany
2IKP (Institute for Nuclear Physics), KIT Karlsruhe Institute of Technology, Karlsruhe, Germany

Dark Matter search is currently one of the most rapidly developing fields in particle physics. The EDELWEISS experiment, located in the underground laboratory LSM (France), is one of the world-leading experiments using cryogenic Ge detectors to search for Dark Matter. EDELWEISS started with three Ge detectors totalling a mass of 0.96 kg and increased to 12 detectors (4 kg in total) in 2009. The current phase, EDELWEISS-III, uses up to 36 detectors totalling 28.8 kg. 'Bolometer boxes', each containing six 16-bit ADCs, digitize the detector signals (2 heat, 4 ionization signals) at a sample rate of 100 kHz and stream them to PCs; the event trigger is implemented in software. For the EDELWEISS-III phase, a new scalable DAQ system was designed and built, based on the 'IPE V4 DAQ system' already used in several astroparticle physics experiments. This architecture provides several advantages and improvements: 1) The new DAQ system integrates seamlessly into the existing EDELWEISS environment. 2) It improves scalability to larger detector masses with respect to the number of trigger computers. 3) Unused channels can be removed from the data stream at an early stage, reducing the data rate. 4) Additional triggers have been implemented in the input-card FPGAs. 5) In parallel to the standard 100 kHz ADC sampling, two channels of one dedicated detector are sampled at 40 MHz; readout of the 40 MHz samples is triggered by the FPGA trigger. 6) The crate computer can be equipped with graphics processing units (GPUs); modern GPUs provide more than a thousand simple processing cores that run in parallel and are well suited to running an additional trigger level in software; several trigger algorithms were investigated in a test setup. The new DAQ system was commissioned successfully in 2014, is now in use in the new EDELWEISS-III setup, and is well prepared for future extensions of the experiment.
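As an illustration of the kind of level trigger such an FPGA or GPU trigger stage might implement, the sketch below fires when a fixed number of consecutive samples exceed a threshold. The function name, window length, and trace are hypothetical, not taken from the EDELWEISS firmware:

```python
def threshold_trigger(samples, threshold, window=4):
    """Fire when `window` consecutive samples exceed `threshold`.

    A deliberately simple level trigger: returns the index of the first
    sample of the first window that fires, or None if nothing fires.
    """
    run = 0
    for i, s in enumerate(samples):
        run = run + 1 if s > threshold else 0
        if run >= window:
            return i - window + 1
    return None

# A pulse riding on baseline noise: the trigger fires at the pulse onset.
trace = [1, 2, 1, 3, 12, 14, 15, 13, 2, 1]
print(threshold_trigger(trace, threshold=10))  # -> 4
```

On a GPU, many such traces (or many window positions of one trace) would be evaluated in parallel, one per thread; the sequential loop here only shows the decision logic.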

(10:50) N2B4-2, The Heavy Photon Search Silicon Vertex Tracker Data Acquisition System

B. Reese1, P. H. Adrian1, R. Herbst1, T. Nelson1, S. Uemura1, O. Moreno2

1SLAC National Accelerator Laboratory, Menlo Park, CA, USA
2Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA, USA

The Heavy Photon Search (HPS) experiment at the Thomas Jefferson National Accelerator Facility (JLab) will search for heavy photons, hypothesized new massive vector bosons. The experiment consists of an electromagnetic calorimeter and a Silicon Vertex Tracker (SVT). The SVT Data Acquisition System (DAQ) supports readout and processing of signals from the 36 silicon strip sensors of the SVT. It also selects and transfers the events identified by the trigger system to the JLab DAQ for further event processing, at rates approaching 50 kHz. Complex front-end electronics for digitization and power distribution are deployed inside the vacuum chamber in order to reduce the vacuum penetration count and shorten analog signal paths. The strong magnetic field used by the experiment further complicates the electronics design. The SLAC RCE platform is used upstream to process and filter raw sensor data for delivery to the JLab DAQ. The system was commissioned and tested at nominal operating conditions during a run in 2015.

(11:10) N2B4-3, Graph-Based Decision Making for Task Scheduling in Concurrent Gaudi

I. Shapoval1,2,3,4, M. Clemencic1, B. Hegner1, D. Funke1,5, D. Piparo1, P. Mato1

1CERN, Geneva, Switzerland
2KIPT, Kharkiv, Ukraine
3INFN-FE, Ferrara, Italy
4UNIFE, Ferrara, Italy
5KIT, Karlsruhe, Germany

The modern trend towards extensive hardware parallelism and heterogeneity pushes software through a paradigm shift towards concurrent data processing architectures. One striking example in the domain of high-energy physics is Gaudi, an experiment-independent software framework used in two of the four major experiments of the Large Hadron Collider project and in several others. The framework is responsible for event processing by means of hundreds of algorithms with logical and data dependencies between each other. Historically, the framework was designed to be inherently sequential: at any point during data processing, only one event is being processed and only one algorithm is being executed on it. This made it possible to respect the algorithms' dependencies simply by organizing them into a well-defined execution path to be run on a CPU. The evolution of the Gaudi framework into its concurrent incarnation, however, requires splitting the execution path dynamically into subsets of algorithms in order to fill the available computing resources efficiently. In this work we present a graph-based decision-making system as a solution to this problem. The approach makes it possible to form and dynamically control the order of concurrent algorithm execution, constrained by the topology of their dependencies at any level of complexity. Furthermore, we show the system's capability of configuration-time and run-time planning for optimal resource usage, and discuss a few concrete scheduling strategies that this approach exposes.
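A minimal sketch of the graph-based idea, assuming only that each algorithm declares which algorithms it depends on (the algorithm names below are hypothetical and this is not the Gaudi API): at each step, every algorithm whose dependencies have all completed forms a "wave" that could run concurrently.

```python
def schedule(deps):
    """Greedy topological scheduling over a dependency graph.

    `deps` maps an algorithm name to the set of algorithms it depends
    on. Returns a list of 'waves'; all algorithms in one wave have
    their dependencies satisfied and may execute concurrently.
    Raises ValueError if the graph contains a cycle.
    """
    remaining = {a: set(d) for a, d in deps.items()}
    done, waves = set(), []
    while remaining:
        ready = [a for a, d in remaining.items() if d <= done]
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(sorted(ready))
        done.update(ready)
        for a in ready:
            del remaining[a]
    return waves

deps = {"decode": set(), "trackfit": {"decode"},
        "calocluster": {"decode"}, "vertex": {"trackfit", "calocluster"}}
print(schedule(deps))
# -> [['decode'], ['calocluster', 'trackfit'], ['vertex']]
```

A real scheduler would dispatch each wave to a thread pool and re-evaluate readiness as individual algorithms finish, rather than waiting for whole waves; the wave structure above only exposes the concurrency the dependency topology permits.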

(11:30) N2B4-4, FPGA Based Event Building and Data Acquisition System for the COMPASS Experiment

Y. Bai1, M. Bodlak2, V. Frolov3,4, V. Jary2, S. Huber1, I. Konorov1, D. Levit1, J. Novy2, R. Salac2, D. Steffen1,4, M. Virius2, S. Paul1

1Physikdepartment E18, TU Muenchen, Garching, Germany
2Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University, Prague, Czech Republic
3Joint Institute for Nuclear Research, Dubna, Russian Federation
4CERN, Geneve, Switzerland

The old, purely software-based data acquisition system of the COMPASS experiment at CERN is being replaced by a new hybrid FPGA-software system, in which a two-level FPGA subsystem takes over the data handling. The hardware is built in a compact AMC form factor using a Xilinx Virtex-6 VLX130T FPGA as data processor. The data handling includes 15:1 data link multiplexing and complete event building in firmware. All high-speed links of the system are routed through a 144x144-channel crosspoint switch, which provides dynamic load balancing. The designed throughput of the system is 1.6 GB/s, while the maximum estimated data rate of the experiment is 1.5 GB/s at the 50 kHz trigger rate. The system exploits the accelerator spill structure to optimize the load on the read-out PCs. The distributed software runs on a server farm and integrates control and configuration functionality. The software closely monitors the data flow for consistency at all stages of the event building process in order to perform load balancing and error recovery. A prototype of the system was successfully tested during the commissioning of the experiment, and the full system will be used in the 2015 physics run.

(11:50) N2B4-5, FPGA Based Data Read-Out System of the Belle 2 Pixel Detector

D. Levit, I. Konorov, Y. Bai, S. Paul

Physikdepartment E18, TU Muenchen, Garching, Germany

The Belle 2 experiment is undergoing an upgrade with the aim of performing the most precise measurement of CP violation using state-of-the-art detectors. The silicon pixel detector is the innermost layer of the experiment. Because of the increased beam background and the high detection efficiency of the silicon pixel detector, the data rate of the detector is estimated at 22 GB/s, 10 times higher than the data rate of all remaining detectors in the experiment. We built a two-stage FPGA-based data read-out system which controls the detector and prepares the data for online data reduction. The system consists of 48 modules in AMC form factor, each using a Xilinx Virtex-6 FPGA and 4 GB of DDR3 memory, and 8 ATCA carrier boards with an installed multi-channel crosspoint switch for dynamic interconnection of the FPGA modules. This setup enables the use of hot-spare modules, which are activated in case of a module failure. The data processing in the system includes event re-assembly, cluster reconstruction, and online sub-event building. To improve the quality of the data signal, which is currently delivered over a 15 m copper InfiniBand cable, the radiation tolerance of the optical transmitters has been investigated. The paper will present the current system design and the results of the irradiation campaign.
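Cluster reconstruction on a pixel detector can be illustrated with a simple connected-component search over fired pixels; this is only a sketch of the generic technique (flood fill over 8-connected neighbours), not the Belle 2 firmware algorithm:

```python
def find_clusters(hits):
    """Group fired pixels into clusters of 8-connected neighbours.

    `hits` is a collection of (row, col) pixel coordinates; returns a
    list of sets, one per cluster, found by iterative flood fill.
    """
    hits = set(hits)
    clusters = []
    while hits:
        stack = [hits.pop()]
        cluster = set(stack)
        while stack:
            r, c = stack.pop()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    n = (r + dr, c + dc)
                    if n in hits:
                        hits.remove(n)
                        cluster.add(n)
                        stack.append(n)
        clusters.append(cluster)
    return clusters

# Three touching pixels form one cluster; an isolated pixel another.
hits = {(0, 0), (0, 1), (1, 1), (5, 5)}
print(sorted(len(c) for c in find_clusters(hits)))  # -> [1, 3]
```

In firmware the same grouping is typically done in a single streaming pass over ordered pixel rows, since a general flood fill does not map well onto an FPGA pipeline.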

(12:10) N2B4-6, The CALICE Silicon-Tungsten Electromagnetic Calorimeter: Production and Test of Fully Equipped Readout Units

R. Cornat

Laboratoire Leprince-Ringuet - Ecole Polytechnique / IN2P3-CNRS, Palaiseau, France

On behalf of the CALICE collaboration

The physics potential of highly granular calorimeters for an experiment at a lepton collider has been demonstrated. Among the several detector concepts, in particular those studied within the CALICE collaboration, the silicon-tungsten technology features an excellent uniformity of the signal, which simplifies calibration, and good stability in time. Starting from a "physics" prototype operated between 2004 and 2011, recent prototypes feature up to 6000 channels per dm3 with embedded front-end electronics and a power-pulsing technique. The first part of the contribution gives an overview of the latest design and of our plans for an application at a future linear collider. The second part focuses on test results of a fully instrumented front-end board featuring 1024 channels. Emphasis is put on calibration data obtained with cosmic rays. The signal-to-noise ratio is discussed, as well as the uniformity of the MIP calibration performed on a first module and on a small batch of detector modules whose production is planned for June 2015. As our concept has been selected for the upgrade of the end caps of the CMS calorimeter, the last section will briefly describe plans for CMS.
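The MIP calibration and its uniformity can be illustrated with a toy computation: given per-channel MIP peak positions (e.g. from cosmic-ray spectra), derive per-channel gains that equalize the response and quote the relative spread as a uniformity figure. The numbers below are invented, not CALICE data:

```python
from statistics import mean, stdev

def mip_uniformity(mip_peaks):
    """Toy MIP equalization.

    Given per-channel MIP peak positions (in ADC counts), return the
    per-channel gains that bring every channel to the mean response,
    and the relative spread (sample RMS / mean) of the raw peaks as a
    simple uniformity figure.
    """
    m = mean(mip_peaks)
    gains = [m / p for p in mip_peaks]
    return gains, stdev(mip_peaks) / m

# Five hypothetical channels with MIP peaks spread around 100 ADC counts.
peaks = [98.0, 102.0, 100.0, 101.0, 99.0]
gains, spread = mip_uniformity(peaks)
print(round(spread, 3))  # -> 0.016, i.e. ~1.6% channel-to-channel spread
```

A real calibration would fit a Landau-Gaussian convolution to each channel's cosmic spectrum to extract the peak position before this equalization step.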