The end of the year is traditionally the time when SFT makes new releases of our main software products. It is a very busy period, with everyone working energetically to meet various deadlines. All last-minute changes and fixes need to be integrated and run through extensive checks to ensure that the quality and performance of the code are not compromised in any respect.
As the new releases are being finalised, work continues on the rest of our projects as well as our R&D activities. We aim to advance both our various software tools and the infrastructure we provide to the CERN experiments and collaborators. We are strongly committed to educating and training students through schools such as the CERN School of Computing, CERN's student programme and external programmes such as Google Summer of Code.
As with any of our projects, valuable feedback from the user community is encouraged and expected in the coming weeks.
More details on some of our activities follow below.
The scale of deployment of the two main products of the CernVM ecosystem, the CernVM File System and the CernVM Appliance, is such that about 350 million files - belonging to about 50 experiments - are distributed worldwide by CernVM-FS, and more than 10,000 new VMs are instantiated every month to serve the needs of the LHC experiments. In 2016 the CernVM team released a new consolidated version of CernVM-FS, which opens up the possibility of contributions from external collaborators to satisfy newly emerging use cases, such as namespacing for data federations. Support for the CernVM appliance has been extended to cover all major cloud providers, both commercial (Google Compute Engine, Amazon Web Services, Microsoft Azure) and academic (OpenStack, CloudStack and OpenNebula). At the beginning of June the team held its 2nd Users Workshop at RAL, UK. More than 40 registered participants attended, and the discussions were a good opportunity for developers, infrastructure maintainers and users to exchange ideas and set up new collaborations. Invited speakers from industry (IBM, VMware, Linaro, Mesosphere, Pivotal) gave dedicated talks on technology trends that were well received by the audience.
The new 2016 release of Geant4, release 10.3, further consolidates the code base of the version 10 series of the Geant4 toolkit and introduces several new features, such as the ability to handle multiple user actions of the same kind from the user code. The geometry modeller in release 10.3 provides improved algorithms for computing the extent of geometrical shapes, consequently reducing the memory required for optimising the geometry setup and providing more efficient tracking. Particle properties have been updated according to PDG-2015, and a new floating level base for ions has been introduced. Electromagnetic physics process parameters are now all handled in a single class and can be modified via interactive UI commands. Corrections of the LPM suppression effect in electron/positron bremsstrahlung are included. The tuning of the Fritiof hadronic model has been fixed to the stable version of the model, based on hadronic-shower measurements made by the LHC experiments. The new release also includes refinements and extensions of the intra-nuclear cascade models (Bertini-like and INCL), as well as improvements in both the code structure and the physics of nuclear de-excitation and radioactive decay.
A major version of ROOT 6, version 6.08, has been released for 2017 data taking and analysis. The new ROOT includes, among many other things, a major upgrade of the ROOT interpreter engine, which allows it to interface seamlessly with the most modern compilers, essential to squeeze the last drops of performance out of recent computer architectures.
Support for the expression of parallelism, following both multithreading and multiprocessing approaches, was significantly developed. Several operations ROOT performs are now implicitly parallelised, i.e. without user intervention. Explicit parallelism has become more accessible to analysts who want to speed up the study of their datasets by controlling the parallelisation of the algorithms they build on ROOT. Analysis remains the main focus: more tools to express analyses in the form of configuration files or command-line arguments have been added, as well as complete integration with the Jupyter technology.
Thinking about the future, a lot of effort was put into the modernisation of the ROOT interfaces. An exciting process of development, prototyping and engagement with the experiments' user communities has started to deliver modern new ROOT interfaces, which will further increase the productivity of the scientific community.
Machine learning support in ROOT is developing rapidly, and this year brought a number of major new developments. A significant update of the Toolkit for Multivariate Analysis (TMVA) arrived in version 6.08, offering many new features. These include a new deep learning library that supports both NVIDIA and Intel graphics processing units, cross-validation and hyperparameter-tuning algorithms, unsupervised learning features, and interactive training and new visualisations with Jupyter notebooks. Machine learning regression capabilities have been extended, and new interfaces to external machine learning tools, such as Keras, have been added. The user interface has also been upgraded, making TMVA output more user-friendly. In the coming year, the plan is to continue expanding the machine learning tools in ROOT and to work closely with the Inter-experimental LHC Machine Learning (IML) working group to identify priority areas in machine learning software and tools for the experiments.
The AIDA-2020 project is part of the Horizon 2020 European framework, and its aim is to advance detector technologies on both the hardware and the software side. The Advanced Software work-package of the AIDA-2020 project contains a number of development tasks related to core, simulation and reconstruction software. EP-SFT is responsible for coordinating the work-package as well as for the development of two new software toolkits:
- the first of these is for modelling complex detector geometries (USolids/VecGeom), and a first version has now been released. The algorithms used during navigation within the geometrical structure exploit vectorisation and show a significant performance gain with respect to existing implementations. The new library can be used from the latest releases of Geant4 and ROOT, so that its impact can be evaluated by the experiments in realistic applications.
- the second is the Event Data Model toolkit, which supports the creation of models with high-performance I/O capabilities. This is achieved by basing the design on 'Plain Old Data' (POD) concepts. The first production-quality version is now available and is currently used by the Future Circular Collider collaboration, whilst other LHC and Linear Collider communities are evaluating it.
The CERN Service for Web based ANalysis, SWAN, has taken off. It is now used by several tens of users daily and has shone on several occasions as the platform for delivering successful tutorials. During the Summer Students' ROOT workshop, the CERN School of Computing and the recent Statistics Academic Training, SWAN provided the infrastructure for the hands-on exercises and was able to serve up to 250 users simultaneously without any degradation in performance or responsiveness. A great collaboration between the SFT and IT-ST groups has delivered a high-quality product!
This year was the sixth year EP-SFT has participated as a mentoring organisation in the Google Summer of Code (GSoC 2016). It was a very productive experience for the students and their mentors. A number of innovative open-source software projects were completed in the areas of particle and detector simulation, data analysis tools, graphics and machine learning. Two of the students working on machine learning projects were highlighted for their work by Google and were invited to present their projects at the Inter-experimental LHC Machine Learning (IML) working group meeting in September. Next year EP-SFT is exploring the possibility of becoming a GSoC umbrella organisation via the HEP Software Foundation, possibly extending the reach of the program to the wider HEP community. We look forward to another productive Google Summer of Code program in 2017!