
Data acquisition challenges for the ProtoDUNE-SP experiment at CERN

The ProtoDUNE Single-Phase (SP) experiment at the CERN Neutrino Platform is taking data after about two years of construction and commissioning. Its primary goal is to validate the design principles, construction procedures, detector technologies, and long-term stability in view of DUNE (Deep Underground Neutrino Experiment), due to take data in 2025. The ProtoDUNE-SP project faced many challenges: it is the largest monolithic Liquid Argon Time Projection Chamber (LAr-TPC) yet constructed and operated in a test beam, and it was built and commissioned on an aggressive timescale. The CERN Detector Technology group has been involved in the installation and development of both the detector control system and the data acquisition (DAQ) system. The DAQ system of ProtoDUNE-SP is a challenge in its own right: it must continuously read out data from the detector, buffer and select them, and temporarily store them, all while keeping custom developments to a minimum in order to fit the tight schedule.

The ProtoDUNE-SP TPC is composed of a central cathode plane and six Anode Plane Assemblies (APAs), which together represent 4% of a DUNE Single-Phase super-module. Each APA comprises 2560 wires and is instrumented with front-end cold electronics, mounted onto the APA frames and immersed in LAr. The cold electronics continuously amplify and digitize the signals at a 2 MHz rate. The digitized data are then transmitted by Warm Interface Boards to the DAQ system via optical fibers.

The TPC is the main data source, generating a total of about 440 Gbit/s. This throughput exceeds the storage capabilities of the system, even considering the 20% beam duty cycle of the extracted SPS beam. The DAQ system therefore implements a global selection system, with a baseline trigger rate of 25 Hz during the SPS spill, together with compression, in order to limit the throughput to storage. The global trigger decision combines signals from the beam instrumentation, the muon tagger, and the photon detectors, allowing snapshots of the detector to be acquired while beam particles or cosmic rays traverse the LAr volume.
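The scale of these numbers can be checked with simple arithmetic. The sketch below uses the channel count and sampling rate quoted above; the 12-bit sample width and the 3 ms readout window per trigger are illustrative assumptions not stated in the article, and the gap between the raw payload and the quoted 440 Gbit/s would be accounted for by link framing overhead.

```python
# Back-of-the-envelope throughput estimate for the ProtoDUNE-SP TPC readout.
# Channel count and sampling rate are from the article; the sample width and
# readout window are assumptions for illustration only.

APAS = 6                 # anode plane assemblies
WIRES_PER_APA = 2560     # wires (channels) per APA
SAMPLE_RATE_HZ = 2e6     # continuous digitization rate
BITS_PER_SAMPLE = 12     # assumed ADC resolution, framing overhead excluded

# Raw payload generated continuously by the cold electronics:
raw_gbit_s = APAS * WIRES_PER_APA * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1e9
print(f"raw payload: {raw_gbit_s:.0f} Gbit/s")  # ~369 Gbit/s before framing

# With the 25 Hz baseline trigger rate and an assumed 3 ms snapshot per trigger:
TRIGGER_RATE_HZ = 25
WINDOW_S = 3e-3          # assumed readout window per trigger
event_mb = (APAS * WIRES_PER_APA * SAMPLE_RATE_HZ * WINDOW_S
            * BITS_PER_SAMPLE / 8 / 1e6)
to_storage_gbit_s = event_mb * 1e6 * 8 * TRIGGER_RATE_HZ / 1e9
print(f"event size: {event_mb:.0f} MB, to storage: {to_storage_gbit_s:.1f} Gbit/s")
```

Under these assumptions each triggered snapshot is over a hundred megabytes uncompressed, which makes clear why trigger selection and compression are both needed before storage.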


Figure 1: Block diagram illustrating the data flow in the ProtoDUNE Single-Phase data acquisition system.

Two solutions for the TPC readout have been implemented with the goal of prototyping a viable architecture for DUNE: the ATCA-based RCE and the PCIe-based FELIX.

The Reconfigurable Cluster Elements (RCE) based readout is a fully meshed, distributed architecture built on networked System-on-Chip (SoC) elements on the ATCA platform, developed by the SLAC National Accelerator Laboratory. This approach focuses on early data processing (aggregation, formatting, compression, and buffering) implemented in tightly coupled software and FPGA designs. Owing to the maturity of its development, it was chosen as the baseline solution, reading out five of the six APAs.

The alternative solution, proposed by the Detector Technology group, uses the Front-End LInk eXchange (FELIX) system, developed within the ATLAS Collaboration at CERN. Its purpose is to support the high-bandwidth readout systems needed for the High-Luminosity LHC while moving away from custom hardware at as early a stage as possible, employing commodity servers and networking devices instead.

This approach requires custom software to handle the data, namely selecting, packing, and compressing it efficiently. To this end, recent technologies have been employed to offload the CPU and achieve the required throughput and performance. In particular, the Intel® QuickAssist Technology (QAT), supported in modern processors, allows compression to be performed on a dedicated chip at a high rate. The data are first reordered on the CPU, exploiting Advanced Vector Extensions (AVX) instructions, to maximize the compression efficiency. Finally, the data are transmitted between hosts through 100 Gbit/s network interfaces using the InfiniBand communication standard, again minimizing the CPU instructions spent on I/O.
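The reorder-then-compress idea can be sketched in a few lines. The layout below is hypothetical: the real system reorders samples with AVX instructions and offloads compression to a QAT device, whereas here plain Python loops and zlib stand in for both, purely to illustrate the data movement.

```python
import array
import zlib

def reorder_channel_major(samples, n_channels):
    """Regroup interleaved samples [ch0, ch1, ..., ch0, ch1, ...] so that each
    channel's slowly varying waveform becomes contiguous, which tends to
    expose longer runs of similar values to the compressor."""
    return [samples[c::n_channels] for c in range(n_channels)]

def restore_interleaved(waves):
    """Inverse of reorder_channel_major, for unpacking after decompression."""
    n_ticks = len(waves[0])
    return [waves[c][t] for t in range(n_ticks) for c in range(len(waves))]

# Synthetic ADC data: 16 channels, 256 ticks, each channel on its own pedestal.
n_channels, n_ticks = 16, 256
interleaved = [1000 + 40 * c + (3 * t + c) % 5
               for t in range(n_ticks) for c in range(n_channels)]

waves = reorder_channel_major(interleaved, n_channels)
payload = array.array("H", [s for w in waves for s in w]).tobytes()

compressed = zlib.compress(payload)                    # stand-in for QAT
assert zlib.decompress(compressed) == payload          # lossless round trip
assert restore_interleaved(waves) == interleaved       # reordering is invertible
```

The key property is that the reordering is lossless and invertible, so the readout can always reconstruct the original interleaved stream after decompression.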

The FELIX-based TPC readout system was successfully employed during data taking, and it has now been chosen as the baseline readout system for DUNE.


Figure 2: Racks hosting the ProtoDUNE-SP Trigger and Data Acquisition system. The system is connected to the on-detector electronics through up to 300 optical fibers. A local temporary storage of 700 TByte is also available.

After two months of data taking with beam particles between September and November 2018, ProtoDUNE-SP is now acquiring cosmic ray data and carrying out many important detector performance studies in view of DUNE.

The ongoing DAQ developments include the integration of hit finding into the readout, enabling trigger-primitive generation and self-triggering of the APA modules. Software- and firmware-based hit finding are being implemented in parallel and will be tested side by side.
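A software hit finder of the kind described above can be sketched as a threshold scan over each wire's waveform. The (start tick, time over threshold, ADC integral) summary per hit is an assumed primitive format for illustration; the actual algorithms and formats are still being developed.

```python
# Minimal sketch of a software hit finder producing trigger primitives.
# The primitive format (start tick, time over threshold, ADC integral) is an
# illustrative assumption, not the final ProtoDUNE-SP/DUNE design.

def find_hits(waveform, pedestal, threshold):
    """Scan one wire's waveform and emit a primitive for every contiguous
    region where the pedestal-subtracted signal exceeds the threshold."""
    hits = []
    start = None
    integral = 0
    for tick, adc in enumerate(waveform):
        signal = adc - pedestal
        if signal > threshold:
            if start is None:
                start, integral = tick, 0
            integral += signal
        elif start is not None:
            hits.append((start, tick - start, integral))
            start = None
    if start is not None:  # hit still open at the end of the waveform
        hits.append((start, len(waveform) - start, integral))
    return hits

# A flat baseline with two pulses:
wave = [500] * 10 + [520, 560, 540] + [500] * 10 + [530, 525] + [500] * 5
print(find_hits(wave, pedestal=500, threshold=10))
# → [(10, 3, 120), (23, 2, 55)]
```

Primitives like these are small enough to be shipped continuously to a trigger decision stage, which is what makes self-triggering on the TPC data feasible.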

Also, a novel storage technology will be tested, with the goal of offering a solution for data buffering before event building in DUNE. This R&D project, hosted by the Detector Technology group and sponsored by Intel, is being carried out in collaboration with CMS and ATLAS.

The ProtoDUNE-SP experiment at CERN has validated the construction and operation of a very large LAr-TPC, from the mechanics to the cryogenic system, from the installation procedures to the construction of the detector modules, and from the instrumentation and control to the detector electronics and the data acquisition system. ProtoDUNE-SP is now playing a crucial role in the in-depth validation of all the experiment's components and offers a unique playground for prototyping and refining the appropriate solutions for DUNE.

The ProtoDUNE-SP DAQ is undergoing significant developments to transform it into a highly available, continuous data-taking system with self-triggering capabilities on the TPC data.