For Phase-2, the CMS DAQ system will undergo a major capacity increase to match HL-LHC performance and the associated CMS physics programme. Compared to the present situation (Run 3), the post-trigger data throughput will grow from 2 Tb/s today to 50 Tb/s for Run 4, and the processing power needed to run the high-level trigger algorithms will rise from ~0.6 MHS06 today to 37 MHS06 (at a trigger rate of 750 kHz and a pileup multiplicity of 200).
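As a back-of-the-envelope check of these figures (assuming the full post-trigger throughput flows at the quoted trigger rate), the implied average event size is:

\[
\frac{50\ \text{Tb/s}}{750\ \text{kHz}} \approx 6.7 \times 10^{7}\ \text{b} \approx 8.3\ \text{MB per event}.
\]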
To meet these new requirements, the current data centre (named OLC-1, on the first floor of SCX5) will be fully refurbished during LS3, with its electrical and cooling capacity growing from 1 MW today to 3 MW, and a new computer room (aka OLC-R) is currently being created in place of the old CMS Control Room to increase the available rack space.
As OLC-1 hosts functions vital to the whole CMS site (access control, databases, user filers, the network star point, safety-related servers, etc.), OLC-R must replicate those functions and be operational before the refurbishment of OLC-1 starts.
OLC-R is a room of approximately 150 m² containing 36 water-cooled server racks. Each rack has a 120 × 75 cm footprint, is 52U high (~250 cm) and is equipped with a cooling door with a capacity of 50–60 kW. Overall, the room has a maximum power/cooling capacity of 2 MW, which translates into a power density of about 13.5 kW/m². For comparison, 10 kW is enough to heat a 200 m² house in the Geneva area!
OLC-1 will be refurbished during LS3 with the same technologies used in OLC-R. After LS3, the first-floor room will host 89 water-cooled racks in ~290 m² and provide up to 3 MW of cooling capacity, corresponding to a power density of ~10 kW/m².
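For reference, the quoted power densities follow directly from the room figures above (the approximate floor areas explain the small rounding differences):

\[
\frac{2\ \text{MW}}{150\ \text{m}^2} \approx 13.3\ \text{kW/m}^2,
\qquad
\frac{3\ \text{MW}}{290\ \text{m}^2} \approx 10.3\ \text{kW/m}^2 .
\]

Note also that 36 racks running near their ~55 kW door capacity would roughly saturate the 2 MW room budget, so the rack and room capacities are well matched.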
These racks are the third generation of water-cooled racks used in the CMS data centre since its initial installation (~2005). Generation 1 and generation 2 racks have heat-load capacities of 10 kW and 16 kW per rack respectively, with no ability to adapt the fan speed or the water flow to the internal heat load.
Generation 3 racks represent a significant step towards a high-density, energy-efficient data centre, as each rack can adapt its water consumption to its actual heat load via local sensors. The same rack technology has also been used with great success for several years by the ProtoDUNE experiment. As LS3 approaches, floor space in OLC-1 is being cleared of generation 1 and 2 racks, and some of them are finding a second life at CERN, being reused in rooms where the cooling is not well adapted to dense stacks of rack-mounted servers.
The inside of a rack with the new cooling system. The high power density achieved by the new racks allows the full rack volume to be utilised. (Image: S. Hurst/CERN)
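To illustrate the heat-load-following behaviour described above, here is a minimal sketch of such a control loop in Python; the setpoint, gain and sensor readings are illustrative assumptions, not the actual rack firmware:

```python
# Minimal sketch of a heat-load-following cooling loop, as in a
# generation-3 rack: setpoint, gain and readings are illustrative
# assumptions, not the real rack controller.

TARGET_OUTLET_C = 25.0   # desired outlet air temperature (assumed)
KP = 0.08                # proportional gain (assumed)

def control_step(outlet_temp_c: float, valve_opening: float) -> float:
    """Return a new water-valve opening in [0, 1] from one reading.

    If the outlet air is warmer than the target (higher heat load),
    open the valve further to increase water flow; if cooler, close it.
    """
    error = outlet_temp_c - TARGET_OUTLET_C
    return min(1.0, max(0.0, valve_opening + KP * error))

if __name__ == "__main__":
    valve = 0.5
    # Simulated readings: the rack ramps from idle to a high heat load.
    for reading in (24.0, 26.5, 29.0, 31.0, 27.0, 25.2):
        valve = control_step(reading, valve)
        print(f"outlet {reading:4.1f} C -> valve {valve:4.2f}")
```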
Along with the refurbishment of the rooms, the cooling and power infrastructure is being completely rebuilt by EN-CV and EN-EL according to CMS requirements. In particular, the cooling water, which today is chilled water (i.e. produced by mechanical cooling), will be provided by free cooling: both server rooms will use cold water produced directly by the cooling towers currently being installed on the CMS site. This will lead to significant savings in the data centres' operating costs and greenhouse gas emissions. Today, with chilled water, our PUE (Power Usage Effectiveness) is ~1.5 (already a very good value). Moving to free cooling will bring our PUE down to ~1.1-1.2, in line with the most efficient data centres around the globe.
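PUE is the ratio of total facility power to the power delivered to the IT equipment. Taking the 2 MW OLC-R capacity as an illustrative IT load:

\[
\mathrm{PUE} = \frac{P_{\text{facility}}}{P_{\text{IT}}}
\quad\Rightarrow\quad
P_{\text{facility}} \approx 3\ \text{MW at PUE} = 1.5,
\qquad
P_{\text{facility}} \approx 2.3\ \text{MW at PUE} = 1.15,
\]

so the cooling and distribution overhead drops from about 1 MW to about 0.3 MW.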