Higgs boson decay into a Z boson and a photon and time resolution of the CMS electromagnetic calorimeter

The thesis focuses on Higgs boson physics, specifically one of its rare and as yet unobserved decay channels: the decay into a Z boson and a photon (Zγ channel). This decay not only complements our understanding of the Higgs boson but also uniquely involves all currently known neutral bosons (Higgs, Z, photon) and is sensitive to potential processes beyond the Standard Model. The final state of the analysis consists of the two leptons from the Z boson decay (muons or electrons in this study) and a photon. Other Standard Model processes containing two leptons and a photon (or misidentified particles) form the background of the analysis. With all data gathered during LHC Run 2 (2015-2018) and Run 3 (2022-2026), it should be possible to obtain evidence for this decay, i.e. to establish it with a statistical significance exceeding three standard deviations.
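To illustrate what "evidence" means quantitatively, the Asimov approximation gives the median expected significance for a counting experiment with s signal events over b background events. The yields below are purely illustrative placeholders, not numbers from the actual Zγ analysis:

```python
import math

def asimov_significance(s, b):
    """Median expected significance for s signal events over b background
    events (Asimov approximation); reduces to s/sqrt(b) for s << b."""
    return math.sqrt(2 * ((s + b) * math.log(1 + s / b) - s))

# Purely illustrative yields -- NOT the Zgamma analysis numbers.
s, b = 75.0, 500.0
print(f"Expected significance: {asimov_significance(s, b):.2f} sigma")
```

With these toy yields the expected significance just exceeds the three-standard-deviation evidence threshold; the real analysis combines many categories, each with its own signal-to-background ratio.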

In addition, the thesis includes an instrumental part focused on optimizing the time resolution of the CMS electromagnetic calorimeter (ECAL). Although designed for precise energy measurements, the ECAL also shows excellent timing resolution for photons and electrons (approximately 150 ps in collisions, 70 ps in test-beam conditions). In a final state populated by photons from multiple overlapping events (pileup), the arrival time of a photon helps to verify its compatibility with the Higgs boson decay vertex. This will be crucial during the high-luminosity phase of the LHC (2029 onward), when the number of overlapping events is expected to be about three times greater than today. New readout electronics for the ECAL are being developed and will be installed in the ECAL and CMS during the thesis. The new electronics achieve a timing resolution of 30 ps for high-energy photons and electrons, a performance demonstrated so far only in ideal beam conditions (no magnetic field, no tracker material in front of the ECAL, no pileup). The thesis aims to develop algorithms to maintain this performance within CMS.
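A rough sketch of why timing helps with vertex assignment: a timing resolution σ_t translates into a path-length (and hence vertex-displacement) resolution of roughly c·σ_t. The conversion below uses the three resolutions quoted above:

```python
# Convert ECAL timing resolutions into an equivalent path-length
# resolution for photon-vertex compatibility checks (rough sketch).
C_CM_PER_PS = 0.029979  # speed of light in cm/ps

for sigma_t_ps in (150, 70, 30):
    sigma_path_cm = sigma_t_ps * C_CM_PER_PS
    print(f"sigma_t = {sigma_t_ps:3d} ps -> path-length resolution ~ {sigma_path_cm:.1f} cm")
```

Since the LHC luminous region extends over several centimetres along the beam axis, the sub-centimetre scale reached at 30 ps is what makes timing a genuine handle for separating pileup vertices.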

The thesis work is a continuation of the ongoing Zγ analysis within the CMS group at CEA Saclay and of the timing performance analysis of the ECAL, in which the Saclay group is a leader. Simple, robust, and efficient analysis tools written in modern C++ and leveraging the ROOT analysis framework make it possible to understand and contribute to every stage of the analysis, from raw data to published results. The CMS Saclay group has held leading responsibilities in CMS since its construction, including deep expertise in Higgs physics, electron and photon reconstruction, detector simulation, and machine learning and artificial intelligence techniques.

Regular trips to CERN are proposed for presenting the results of this work to the CMS collaboration, for participating in laboratory tests planned for the new ECAL electronics, and for taking part in its installation.

Mining LEP data for fragmentation: A TMD-oriented analysis of pi+pi- pairs in e+e- collisions

This project aims to advance our understanding of quark and gluon fragmentation by performing the first-ever extraction of Transverse-Momentum-Dependent Fragmentation Functions (TMDFFs) for charged pions using archived data from LEP experiments like DELPHI or ALEPH.
Fragmentation Functions, which describe how partons form detectable hadrons, are non-perturbative and must be determined from experimental data. TMDFFs provide more detailed information about the transverse momentum of these hadrons. An ideal process to study them is the production of back-to-back pi+pi- pairs in electron-positron annihilations, a measurement surprisingly absent from both past and current experiments.
The project will leverage the CERN Open Data initiative to access these historical data. The work is structured in three key steps: first, overcoming the technical challenge of accessing the data with potentially obsolete software; second, extracting the relevant physical distributions, such as the transverse momentum of the pion pairs; and third, using Monte Carlo simulations (e.g., Pythia8) to interpret the results.
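The central observable of the second step can be sketched as the transverse momentum of a pion with respect to a reference axis, e.g. the thrust axis approximating the quark-antiquark direction in the e+e- event. Momenta and axis below are toy values for illustration only:

```python
import math

def pt_relative(p, axis):
    """Transverse momentum of 3-vector p (GeV) with respect to a
    reference axis, e.g. the event thrust axis."""
    norm = math.sqrt(sum(a * a for a in axis))
    ax = [a / norm for a in axis]                      # unit axis
    p_long = sum(pi * ai for pi, ai in zip(p, ax))     # longitudinal part
    p2 = sum(pi * pi for pi in p)
    return math.sqrt(max(p2 - p_long ** 2, 0.0))

# Toy pion momentum and a toy thrust axis along z -- illustrative only.
p_pion = [1.2, 0.3, 8.0]
axis = [0.0, 0.0, 1.0]
print(f"pT w.r.t. axis: {pt_relative(p_pion, axis):.3f} GeV")
```

In the real analysis the axis itself must be reconstructed event by event, and its resolution feeds into the systematic uncertainties on the extracted TMDFFs.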
A crucial part of the analysis will be to identify the observables most sensitive to TMDFFs through simulations. The final data analysis will employ modern techniques to ensure a robust estimate of all uncertainties. Once completed, this pioneering measurement will be incorporated into a global analysis of TMD data, significantly improving the accuracy of TMDFFs and pushing the boundaries of our knowledge of non-perturbative QCD.

Machine Learning-Based Algorithms for Real-Time Standalone Tracking in the Upstream Pixel Detector at LHCb

This PhD aims to develop and optimize next-generation track reconstruction capabilities for the LHCb experiment at the Large Hadron Collider (LHC) through the exploration of advanced machine learning (ML) algorithms. The newly installed Upstream Pixel (UP) detector, located upstream of the LHCb magnet, will play a crucial role from Run 5 onward by rapidly identifying track candidates and reducing fake tracks at the earliest stages of reconstruction, particularly in high-occupancy environments.

Achieving fast and highly efficient tracking is essential to fulfill LHCb’s rich physics program, which spans rare decays, CP-violation studies in the Standard Model, and the characterization of the quark–gluon plasma in nucleus–nucleus collisions. However, the increasing event rates and data complexity expected for future data-taking phases will impose major constraints on current tracking algorithms, especially in heavy-ion collisions where thousands of charged particles may be produced per event.

In this context, we will investigate modern ML-based approaches for standalone tracking in the UP detector. Successful applications in the LHCb VELO tracking system already demonstrate the potential of such methods. In particular, Graph Neural Networks (GNNs) are a promising solution for exploiting the geometric correlations between detector hits, allowing for improved tracking efficiency and fake-rate suppression, while maintaining scalability at high multiplicity.
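The first step of a GNN tracking pipeline can be sketched as graph construction: hits on consecutive detector layers are connected whenever they fall inside a geometric search window, and the network then classifies these candidate edges as true or fake track segments. The layer layout and window size below are toy values, not the UP detector geometry:

```python
# Toy graph-building step for GNN-based tracking: connect hits on
# consecutive layers whose transverse displacement is small.
def build_hit_graph(hits, max_dxy=1.0):
    """hits: list of (layer, x, y). Returns candidate edges (i, j)
    between hits on consecutive layers inside the search window."""
    edges = []
    for i, (li, xi, yi) in enumerate(hits):
        for j, (lj, xj, yj) in enumerate(hits):
            if lj == li + 1 and abs(xj - xi) < max_dxy and abs(yj - yi) < max_dxy:
                edges.append((i, j))
    return edges

hits = [(0, 0.0, 0.0), (1, 0.2, 0.1), (1, 5.0, 5.0), (2, 0.4, 0.2)]
print(build_hit_graph(hits))  # only the compatible edges 0->1 and 1->3 survive
```

Keeping the window tight controls the number of edges the GNN must score, which is what keeps the approach scalable at the hit multiplicities expected in heavy-ion collisions.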

The PhD program will first focus on the development of a realistic GEANT4 simulation of the UP detector to generate ML-suitable datasets and study detector performance. The next phase will consist of designing, training, and benchmarking advanced ML algorithms for standalone tracking, followed by their optimization for real-time GPU-based execution within the Allen trigger and reconstruction framework. The most efficient solutions will be integrated and validated inside the official LHCb software stack, ensuring compatibility with existing data pipelines and direct applicability to Run-5 operation.

Overall, the thesis will provide a major contribution to the real-time reconstruction performance of LHCb, preparing the experiment for the challenges of future high-luminosity and heavy-ion running.

Impact of irradiation parameters on the alpha’ phase formation in oxide dispersion strengthened steels

Ferritic-martensitic oxide dispersion strengthened steels (ODS steels) are materials of great interest in the nuclear industry. Predominantly composed of iron and chromium, these materials can become brittle due to the precipitation of a chromium-rich phase, called α′, under irradiation. This phase, known to be sensitive to irradiation conditions, provides an ideal topic for a deeper exploration of the capability to emulate neutron irradiation with ions. Indeed, while ion irradiations are frequently used to understand phenomena observed during neutron irradiations, the question of their representativeness is often raised.

In this thesis, we aim to understand how the irradiation parameters can affect the characteristics of the α′ phase in ODS steels. To do so, various ODS steels will be irradiated under different conditions (flux, dose, temperature, and type of particles, such as ions, neutrons, electrons), and subsequently analyzed at the nanoscale. The α′ phase (size, chromium content) obtained for each ion irradiation condition will be compared to that obtained after neutron irradiation.

The MINI-BINGO demonstrator: advancing the quest to unveil the neutrino nature

BINGO is an innovative neutrino physics project designed to lay the groundwork for a large-scale bolometric experiment dedicated to the search for neutrinoless double beta decay. The goal is to achieve an extremely low background index—on the order of 10^-5 counts/(keV·kg·yr)—while delivering excellent energy resolution in the region of interest. These performance levels will enable the exploration of lepton number violation with unprecedented sensitivity.

The project relies on scintillating bolometers, which are particularly effective at rejecting the dominant background caused by surface alpha particles. It focuses on two highly promising isotopes, 100Mo and 130Te, whose complementary properties make them both strong candidates for future large-scale investigations.

BINGO introduces three major innovations to the well-established heat-light hybrid bolometer technology. First, the sensitivity of the light detectors will be enhanced by an order of magnitude through the use of Neganov-Luke amplification. Second, a novel detector assembly design will reduce surface radioactivity contributions by at least an order of magnitude. Third, and for the first time in a macrobolometer array, an internal active shield made of ultrapure BGO scintillators with bolometric light readout will be implemented to suppress external gamma background.

As part of this thesis work, the student will take part in the assembly and installation of the MINI-BINGO demonstrator within the cryostat recently installed at the Modane Underground Laboratory. They will be involved in data acquisition and analysis and will contribute to evaluating the background rejection achieved by the detector's final configuration.

Optimization of gamma radiation detectors for medical imaging. Time-of-flight positron emission tomography

Introduction
Innovative functional imaging technologies are contributing to the CEA's ‘Medicine for the Future’ priority. Positron emission tomography (PET) is a nuclear medical imaging technique widely used in oncology and neurobiology. The decay of the radioactive tracer emits positrons, which annihilate into two photons of 511 keV. These photons are detected in coincidence and used to reconstruct the distribution of tracer activity in the patient's body.
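The time-of-flight principle of the title can be quantified with one line of arithmetic: the arrival-time difference Δt of the two 511 keV photons shifts the reconstructed annihilation point by c·Δt/2 along the line of response. The coincidence time resolution used below is an illustrative value, not a ClearMind specification:

```python
# Time-of-flight PET: the photon arrival-time difference localizes the
# annihilation point along the line of response via dx = c * dt / 2.
C_MM_PER_PS = 0.29979  # speed of light in mm/ps

def tof_offset_mm(dt_ps):
    """Offset of the annihilation point from the midpoint of the line
    of response, given the arrival-time difference in ps."""
    return C_MM_PER_PS * dt_ps / 2.0

# Illustrative: a 200 ps coincidence time resolution localizes the
# source to about 30 mm along the line of response.
print(f"{tof_offset_mm(200):.1f} mm")
```

This is why every gain in timing resolution translates directly into sharper localization, and ultimately into better image signal-to-noise for the same injected dose.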
We propose that you contribute to the development of an ambitious, patented technology: ClearMind. The first prototype is in our laboratories. This gamma-photon detector uses a monolithic scintillating crystal of high density and atomic number, in which Cherenkov and scintillation photons are produced. These optical photons are converted into electrons by a photoelectric layer and multiplied in a microchannel plate. The induced electrical signals are amplified by gigahertz amplifiers and digitized by SAMPIC fast-acquisition modules. The opposite side of the crystal will be fitted with a matrix of silicon photomultipliers (SiPMs).
Today we have our first prototype, and we are preparing two more.

The proposed work
You will work in an advanced instrumentation laboratory in a particle physics environment.
The first step will be to optimize the "components" of ClearMind detectors in order to achieve nominal performance. We will work on scintillating crystals, optical interfaces, photoelectric layers and the associated fast photodetectors (MCP-PMT and SiPM), and readout electronics.
We will then characterize the performance of the prototype detectors on our measurement benches, which are under continuous development. The data acquired will be interpreted using in-house analysis software written in C++ and/or Python.
Finally, we will compare the physical behavior of our detectors with Monte Carlo simulations (Geant4/Gate).
A particular effort will be devoted to the development of ultra-fast scintillating crystals in the context of a European collaboration.

Supervision
The successful candidate will work under the joint supervision of Dominique Yvon and Viatcheslav Sharyy (DRF/IRFU & BIOMAPS). The CaLIPSO group at IRFU & BIOMAPS specializes in the development and characterization of innovative PET detectors, including detailed detector simulation. As part of the project, we work closely with IJCLab in Orsay, which develops our readout and acquisition electronics; CEA/DM2S, which works in particular on trusted AI algorithms; CPPM in Marseille, which evaluates our detectors under PET imaging acquisition conditions; and UMR BIOMAPS (CEA/SHFJ), which works on image calculation algorithms.

Requirements
Knowledge of the physics of particle-matter interactions, radioactivity, and the principles of particle detectors is essential. A strong interest in instrumentation and laboratory work is recommended. Basic programming skills (e.g., C++) and familiarity with physics simulation software such as Gate/Geant4 are important.

Skills acquired
Good knowledge of state-of-the-art particle detector and positron emission tomography technologies. Simulation principles and techniques for particle-matter interaction and detection systems. Analysis of complex data.

Contact
Dominique Yvon, dominique.yvon@cea.fr
Viatcheslav Sharyy, viatcheslav.sharyy@cea.fr

CUPID-Stage I: Detector optimization and analysis in the context of a next generation 0nbb search

The CUPID experiment (CUORE Upgrade with Particle IDentification) aims to achieve unprecedented sensitivity for the detection of neutrinoless double beta decay (0nßß) using an array of 1596 lithium molybdate (Li2MoO4) crystals with a total mass of ~450 kg. If detected, this process would be a direct observation of new physics in the lepton sector: for example, it violates lepton number by two units. Depending on the model, it can provide valuable insight into the neutrino mass scale and possibly into matter generation in the Universe through leptogenesis.

The use of lithium molybdate for this study is particularly advantageous due to its scintillation properties and the high Q-value of the decay, which lies above most environmental gamma backgrounds. The CUPID experiment employs this material as cryogenic calorimetric detectors, in which the heat signal from particle interactions, of O(100 µK/MeV), is registered by a sensitive thermistor at a temperature of ~10 mK. Thanks to its high Q-value, large phase-space factor, and nuclear transition matrix element, Mo-100 offers a particularly high sensitivity. This will also allow for precision studies and tests of the Standard Model through analyses of the spectral shape of another process: the so-called two-neutrino double beta decay (2nbb), which is allowed in the Standard Model. However, this rare process (half-life of 7×10^17 yr) is not only an interesting particle/nuclear-physics target; it is also expected to contribute the most important background in CUPID: the random coincidence of two events whose energies add up to the Q-value of the 0nßß search.
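An order-of-magnitude sketch of why this random-coincidence (pile-up) background matters: the half-life above fixes a steady 2nbb decay rate per crystal, and two decays falling within the detector's resolving time merge into a single event. The crystal mass, enrichment, and resolving time below are assumptions for illustration, not official CUPID parameters:

```python
import math

# Back-of-the-envelope 2nbb pile-up rate in one Li2MoO4 crystal.
# All detector numbers are illustrative assumptions.
N_A = 6.022e23
crystal_mass_g = 280.0   # ~450 kg / 1596 crystals (assumed)
molar_mass_g = 177.8     # Li2MoO4 enriched in Mo-100 (assumed)
half_life_yr = 7e17      # 2nbb half-life of Mo-100 (from the text)
yr_s = 3.156e7

n_mo100 = crystal_mass_g / molar_mass_g * N_A
decay_rate_hz = math.log(2) * n_mo100 / (half_life_yr * yr_s)

window_s = 1e-3          # assumed pulse-pair resolving time
pileup_rate_hz = decay_rate_hz ** 2 * window_s

print(f"2nbb rate per crystal: {decay_rate_hz * 1e3:.0f} mHz")
print(f"pile-up rate: {pileup_rate_hz * yr_s:.0f} per crystal-year")
```

Only the small fraction of these pile-ups whose summed energy lands near the Q-value mimics 0nßß, but with thousands of crystal-years of exposure even that fraction drives the background budget, which is why faster light detectors and pulse-shape analysis are central to this thesis.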

CUPID aims to deploy its new detector array in two phases: an initial array with one third of the mass will be deployed by 2030. In the meantime, several tower-scale measurement and optimization campaigns during this thesis project will make it possible to analyze and optimize the performance of the CUPID detector modules. The further suppression of this so-called pile-up background through detector optimization (acting on the sensor attachment of the light detector with a robotic assembly station developed at CEA) and through advanced analysis techniques within this thesis will enhance the sensitivity and science reach of CUPID. An extension of the analysis techniques developed in this thesis to the processing of an array of O(1000) detectors will be tested with the existing TeO2 detector array of CUORE. In this context, the developed analysis techniques will contribute to the final science analyses of CUORE, the leading experiment in the 0nßß search with Te-130.
