
Trigger strategies for the LHCb experiment

by Michel De Cian (EPFL), Agnieszka Dziurda (IFJ PAN), Conor Fitzpatrick (CERN), Vladimir Gligorov (CNRS/LPNHE), Sascha Stahl (CERN)

The LHCb experiment has finished a very successful data-taking period in Run 2, collecting an additional 6 inverse femtobarns of proton-proton collisions. A significant modification to the reconstruction and trigger system of LHCb was implemented during the first LHC long shutdown, one that both enhanced the physics reach of LHCb and served as a testbed for an even more ambitious upgrade ahead of Run 3.

Figure 1. Overview of the LHCb trigger system

The Run 1 LHCb trigger consisted of a hardware level-0 stage that reduced the rate at which the detector was read out to 1 MHz. After this hardware stage, events were processed by a two-stage high level trigger (HLT): software running on the Event Filter Farm, which consists of commodity servers. The first HLT stage performed inclusive selections on one- and two-track signatures using a subset of the event information; the second stage performed more exclusive selections with access to a fast reconstruction of the entire event. Events selected by the trigger were stored offline, where they were further processed using the highest-quality reconstruction, taking into account subdetector-specific alignments and calibrations. Although this trigger enabled the full LHCb physics programme, fuller exploitation of low-momentum charged particles at the first stage and full particle identification at the second stage would enhance the performance for c-hadron physics in particular.
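The staged rate reduction described above can be sketched as a cascade of increasingly detailed selections. The thresholds, event fields and mass window below are purely illustrative stand-ins, not actual LHCb trigger settings:

```python
# Illustrative sketch (hypothetical thresholds and event fields) of a
# staged trigger: a hardware level-0 stage on coarse signatures, then
# two software HLT stages working on progressively fuller information.

def level0(event):
    """Hardware stage: cut on a coarse high-ET signature (MeV)."""
    return event["max_et"] > 3000.0

def hlt1(event):
    """First software stage: inclusive one-track selection on a
    subset of the information (here, pT and impact parameter)."""
    return any(t["pt"] > 500.0 and t["ip"] > 0.1 for t in event["tracks"])

def hlt2(event):
    """Second software stage: exclusive selection on the fully
    reconstructed candidate (here, an illustrative D0 mass window)."""
    return abs(event["candidate_mass"] - 1864.8) < 25.0

def trigger(event):
    """An event is kept only if it passes every stage in turn."""
    return level0(event) and hlt1(event) and hlt2(event)

accepted = trigger({
    "max_et": 4200.0,
    "tracks": [{"pt": 800.0, "ip": 0.25}],
    "candidate_mass": 1869.0,
})
print(accepted)  # True: the event passes all three stages
```

Each stage only runs if the previous one accepted the event, which is what lets the early, cheap selections protect the expensive full reconstruction from the bulk of the rate.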

For Run 2, the first and second stages of the HLT were split into separate applications, and a large (∼11 PB) disk buffer was added, taking advantage of the disk space available on the event filter farm servers. The combination of the disk buffer and separate software trigger stages had two significant advantages. First, by storing the output of HLT1 to disk, HLT2 could be run asynchronously, meaning that the event filter farm could now be used exclusively for HLT2 processing during LHC inter-fill periods and technical stops. As the LHC collides protons only a little over 50% of the time, once inter-fill gaps, technical stops and machine development periods are accounted for, this effectively doubled the processing resources available to the trigger compared to running HLT2 synchronously.
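The doubling argument is simple arithmetic, and can be made explicit. A minimal sketch, using only the ∼50% collision fraction quoted above (the function name is invented for illustration):

```python
# Back-of-the-envelope sketch (not LHCb code): how deferring HLT2 via a
# disk buffer increases the effective farm time available to it.

def effective_hlt2_gain(collision_fraction: float) -> float:
    """Ratio of deferred-HLT2 farm time to synchronous farm time.

    Synchronous: HLT2 can only run while the LHC delivers collisions,
                 so it gets 'collision_fraction' of the wall clock.
    Deferred:    HLT1 output waits on disk, so HLT2 can also use the
                 farm during inter-fill gaps and technical stops,
                 i.e. the full wall clock.
    """
    return 1.0 / collision_fraction

# LHC collides protons a little over 50% of the time:
print(f"Deferred processing gain: x{effective_hlt2_gain(0.5):.1f}")
```

With a collision fraction of 0.5 the gain is a factor of two, matching the statement above; in practice the buffer must also be sized so it never fills faster than HLT2 can drain it.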

The added capacity allowed momentum thresholds to be lowered, significantly increasing the efficiency for charm physics. Second, it allowed alignment and calibration of the subdetectors to be performed on the output of HLT1, before HLT2 ran: the alignment provides the best known position and orientation of the detector elements, while the calibration applies the necessary corrections to the detector element response. In order not to affect physics measurements, the accuracy of these corrections must be significantly better than the detector resolution. Alignment and calibration quantities can drift over time for several reasons, for example pressure and temperature changes, automatic or manual movements of the detector, and the frequent switching of the dipole magnet polarity, so they must be re-evaluated frequently.

During normal data taking in Run 2, dedicated data streams were recorded for detector alignment and calibration purposes and sent to a portion of the farm, where automated procedures evaluated the current alignment and updated it if necessary. Once sufficient data had been collected, these procedures marked the HLT1-processed data as ready for processing by HLT2. Most of the reconstruction algorithms used in Run 1 were reoptimised for Run 2, both to improve physics performance and, where possible, to reduce their computing resource requirements. In addition, the Run 2 reconstruction sequence incorporated machine-learning algorithms to reject tracks not used for analysis as early as possible, minimising unnecessary resource usage at later stages. The performance of the Run 2 trigger is described in [1]. Comparing the reconstruction sequences of Run 1 and Run 2, an overall gain of a factor of two in timing was obtained without any loss in physics performance. Thanks to these improvements, and the increased CPU time made available to HLT2 by the disk buffer, the best-quality, 'offline-like' Run 1 reconstruction could now be performed in HLT2.
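The "evaluate and update if necessary" logic of the automated alignment can be sketched as follows. This is an illustration of the control flow only, not the LHCb implementation; the residual model, names, and the micron-scale numbers are all hypothetical, chosen so the update threshold sits well below the assumed resolution, as the text requires:

```python
# Illustrative sketch (hypothetical names and numbers) of an automated
# alignment update: a new alignment is evaluated on a dedicated
# calibration stream and deployed only if it differs from the current
# one by more than a threshold set well below the detector resolution.

RESOLUTION_UM = 12.0        # assumed single-hit resolution, microns
UPDATE_THRESHOLD_UM = 2.0   # deploy updates only for shifts above this

def evaluate_alignment(residuals_um):
    """Stand-in for the alignment fit: mean track residual in microns."""
    return sum(residuals_um) / len(residuals_um)

def maybe_update(current_shift_um, residuals_um):
    """Return the alignment constant HLT2 should use next."""
    new_shift_um = evaluate_alignment(residuals_um)
    if abs(new_shift_um - current_shift_um) > UPDATE_THRESHOLD_UM:
        return new_shift_um   # significant movement: deploy new constants
    return current_shift_um   # within tolerance: keep existing alignment

# e.g. after a magnet polarity switch the residuals move by ~5 microns,
# so the constants are updated; a sub-threshold drift would be ignored.
print(maybe_update(0.0, [5.1, 4.8, 5.3, 4.9]))
</```

Keeping the threshold well below the resolution ensures that whatever residual misalignment survives an "ignore" decision cannot bias physics measurements, while avoiding constant churn of the calibration constants.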

The improved HLT reconstruction and the real-time alignment and calibration removed the need for a dedicated offline reconstruction step, freeing resources for simulation production and user analysis. They also allowed analysts to store only the subset of information needed for analysis, rather than the full event that would be required for further reconstruction, substantially reducing the event size [2][3]. This new paradigm greatly increases the efficiency with which signals can be selected and allows higher rates to be saved for analysis, broadening the physics programme of the LHCb experiment.
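The reduced-persistence idea can be illustrated with a toy event model. The structure below is invented for the sketch and is not the actual LHCb event format; the point is only that the persisted record keeps the analysis-level candidate and drops the raw detector data that would otherwise be needed for re-reconstruction:

```python
# Toy sketch (hypothetical event model) of reduced persistence: instead
# of the full raw event, only the trigger candidate and the quantities
# needed for analysis are written out.

FULL_EVENT = {
    "raw_banks": ["velo", "rich1", "rich2", "calo", "muon", "tracker"],
    "tracks": list(range(200)),  # all reconstructed tracks in the event
    "candidate": {"mass": 1864.8, "pt": 3200.0, "vertex": (0.1, -0.2, 5.4)},
}

def persist_reduced(event):
    """Keep only the analysis-level candidate information."""
    return {"candidate": event["candidate"]}

reduced = persist_reduced(FULL_EVENT)
print(sorted(reduced))  # only the candidate block survives
```

The saving comes at a price: once the raw banks are dropped, the event can never be re-reconstructed, which is exactly why the trigger-level reconstruction must already be of offline quality.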

As an example, the efficiencies of the HLT1 inclusive trigger lines for a subset of representative beauty- and charm-hadron decays are shown in Fig. 2.

Figure 2. Efficiency of the HLT1 inclusive trigger for a subset of representative beauty (left) and charm (right) decays as a function of transverse momentum.

The Run 2 trigger and reconstruction model has been adopted as the baseline for the LHCb Upgrade, which is taking place over the current Long Shutdown in preparation for data taking in Run 3. As part of this upgrade, the 1 MHz readout limitation is being removed along with the level-0 trigger, resulting in a trigger-less readout of LHCb events into a fully software-based trigger and reconstruction operating at the LHC bunch-crossing frequency. The online alignment and calibration of LHCb data, its full reconstruction, and the subsequent use of reduced event formats together form the Real-Time Analysis paradigm.


Further Reading

[1] CERN-LHCb-DP-2019-001

[2] LHC experiments upgrade trigger plans

[3] CERN-LHCb-DP-2019-002