Elliptic Flow of Charmed Hadrons in Heavy-Ion Collisions at LHCb

The FLOALESCENCE project explores one of the most fundamental questions in Quantum Chromodynamics (QCD): how quarks and gluons transition from a deconfined Quark–Gluon Plasma (QGP) into ordinary hadrons. This transition, called hadronization, occurred microseconds after the Big Bang and can be recreated today in ultra-relativistic lead–lead collisions at CERN’s Large Hadron Collider (LHC).
The PhD will focus on charm quarks, excellent probes of the QGP because they are produced early in the collision and interact throughout its evolution. Using the LHCb detector, uniquely sensitive in the forward rapidity region, the project aims to measure the elliptic flow (v2) of charmed baryons (Λc+) and mesons (D0) in Pb–Pb collisions. The goal is to test whether these heavy quarks thermalize and hadronize through a coalescence mechanism, a key feature of QGP dynamics.

Objectives and tasks:
- Extract and analyze Λc+ and D0 signals in newly collected 2024–2025 Pb–Pb datasets at LHCb.
- Implement a novel flow analysis method (based on the reformulated Lee–Yang Zeros approach) for the first time at LHCb.
- Develop an event-by-event multiplicity metric to correlate flow with system energy density.
- Compare results to theoretical models and cross-check with measurements at central rapidity (ALICE).
- Publish results and present findings at international conferences.
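As a toy illustration of the kind of flow observable targeted here (not the Lee–Yang Zeros method itself, which the project will implement), the sketch below estimates v2 from azimuthal angles with a simple two-particle cumulant; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
v2_true = 0.08          # assumed toy flow coefficient
n_events, n_part = 2000, 200

def sample_phi(n, v2, psi):
    # rejection-sample dN/dphi ∝ 1 + 2 v2 cos 2(phi - psi)
    out = []
    while len(out) < n:
        phi = rng.uniform(0, 2 * np.pi, n)
        keep = rng.uniform(0, 1 + 2 * v2, n) < 1 + 2 * v2 * np.cos(2 * (phi - psi))
        out.extend(phi[keep])
    return np.array(out[:n])

c2 = []
for _ in range(n_events):
    psi = rng.uniform(0, 2 * np.pi)          # random event-plane angle
    phi = sample_phi(n_part, v2_true, psi)
    q = np.sum(np.exp(2j * phi))             # second-harmonic flow vector Q2
    m = n_part
    c2.append((abs(q) ** 2 - m) / (m * (m - 1)))  # per-event pair average <2>

v2_est = np.sqrt(np.mean(c2))                # v2{2} = sqrt(<<2>>)
print(f"v2{{2}} estimate: {v2_est:.3f}")
```

In a real analysis the pair average would be corrected for non-flow and detector acceptance; this sketch only shows why <cos 2(phi_i − phi_j)> is sensitive to v2².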

The successful candidate will:
- Develop advanced data-analysis expertise with CERN’s LHCb software framework, ROOT, and machine learning–based signal extraction.
- Gain in-depth knowledge of QCD and relativistic heavy-ion physics, especially QGP properties and collective phenomena.
- Learn modern statistical methods for flow analysis and uncertainty estimation.
- Acquire collaborative and communication skills within a major international experiment (LHCb), including presentations in collaboration meetings and conferences.
- Build strong experience in scientific computing, big-data handling, and detector physics, valuable for both academic and industry careers.

Probing quantum information with the top quark at the LHC

This PhD project aims to explore the quantum nature of top-quark pair production at the Large Hadron Collider by studying spin correlations and entanglement-related observables in data recorded by the ATLAS experiment. The recent breakthrough observations of entanglement in top-antitop events have opened an entirely new window onto the quantum structure of fundamental interactions, turning the LHC into a machine to test quantum information at the TeV scale. Building on this momentum, the thesis will focus on reconstructing the quantum state of top-quark pairs using ATLAS Run-3 data, with particular attention to the extraction of spin correlations and entanglement-sensitive observables in challenging high-momentum topologies. By improving reconstruction strategies and carefully assessing detector effects, the aim is to measure these quantum properties with good precision and to clarify what quantum information can bring to our understanding of elementary particles.
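To make one such observable concrete: the opening angle φ between the two decay leptons (each taken in its parent top's rest frame) follows (1/2)(1 − D cos φ), and D = −3⟨cos φ⟩ < −1/3 signals entanglement. The toy sketch below, with an invented value of D, shows the extraction:

```python
import numpy as np

rng = np.random.default_rng(1)
D_true = -0.5            # assumed toy value; D < -1/3 signals entanglement
n = 200_000

# rejection-sample the distribution (1/2)(1 - D cos(phi)) in cos(phi) ∈ [-1, 1]
x = rng.uniform(-1, 1, n)
keep = rng.uniform(0, (1 + abs(D_true)) / 2, n) < 0.5 * (1 - D_true * x)
cosphi = x[keep]

# since <cos(phi)> = -D/3 under this distribution, D = -3 <cos(phi)>
D_est = -3.0 * cosphi.mean()
entangled = D_est < -1 / 3
print(f"D estimate: {D_est:.3f}, entanglement criterion satisfied: {entangled}")
```

A real measurement must of course unfold detector effects and reconstruct the top rest frames, which is precisely where the thesis's high-momentum topologies are challenging.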

Numerical and experimental study of cryogenic refrigeration system for HTS-based nuclear fusion reactors

The challenge of climate change and the promise of CO2-free energy production are driving the development of new nuclear fusion reactor concepts that differ significantly from systems such as ITER or JT-60SA [R1]. These new fusion reactors push the technological boundaries by reducing investment and operating costs through the use of high-temperature superconducting (HTS) magnets to confine the plasma [R4]. HTS magnets promise to achieve high-intensity magnetic fields while operating at higher cooling temperatures, thereby reducing the complexity of the cryogenic cooling, which is normally achieved by forced circulation of supercritical helium at approximately 4.5 K (1.8 K for WEST/Tore Supra) delivered by a dedicated cryogenic plant.

The pulsed operation of tokamaks induces a temporal variation in the thermal load absorbed by the cooling system. This operating scenario has led to the development of several load smoothing techniques to reduce the amplitude of these thermal load variations, thereby reducing the size and power of the cooling system, with beneficial effects on cost and environmental impact. These techniques use liquid helium baths (at approximately 4 K) to absorb and temporarily store some of the thermal energy released by the plasma pulse before transferring it to the cryogenic installation [R5].

The objective of this thesis is to contribute to the development of innovative concepts for the refrigeration of large HTS systems at temperatures between 5 and 20 K. It will include (1) the modeling of cryogenic system and cryodistribution architectures as a function of the heat-transfer-fluid temperature, and (2) the exploration of innovative load smoothing techniques in collaboration with the multidisciplinary "Fusion Plant" team of the PEPR SUPRAFUSION project. The first part will involve the development and improvement of the 0D/1D numerical toolbox Simcryogenics, based on Matlab/Simscape [R6], through the implementation of physical models (closure laws) and the selection of appropriate modeling techniques to analyze and compare suitable architectural solutions. The second part will be experimental and will involve conducting load smoothing experiments on an existing cryogenic loop operating between 8 and 15 K.
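The load smoothing idea can be illustrated with a deliberately simplified 0D energy balance (this is not Simcryogenics, and all values are invented): a liquid-helium buffer absorbs the pulsed heat load while the refrigerator is sized for the average power only.

```python
import numpy as np

# toy 0D energy balance: a liquid-helium bath buffers a pulsed heat load
# while the refrigerator extracts a constant (average) power
dt = 1.0                      # time step, s
t = np.arange(0, 3600, dt)    # one hour of operation
period, burn = 1800.0, 600.0  # assumed pulse period and burn duration, s
q_pulse = np.where((t % period) < burn, 300.0, 50.0)  # W, toy plasma load

q_ref = q_pulse.mean()        # refrigerator sized for the average load
latent = 20.9e3               # J/kg, latent heat of helium near 4 K
m_liq = np.empty_like(t)
m_liq[0] = 100.0              # kg of liquid initially in the bath
for i in range(1, len(t)):
    # excess heat boils off liquid; excess cooling re-condenses it
    m_liq[i] = m_liq[i - 1] - (q_pulse[i - 1] - q_ref) * dt / latent

print(f"refrigerator load: {q_ref:.0f} W (peak load {q_pulse.max():.0f} W)")
print(f"liquid inventory swing: {m_liq.max() - m_liq.min():.2f} kg")
```

The buffer lets the cold box run at roughly the average load instead of the peak, which is the cost and sizing benefit described above; real studies add pressure, temperature and transfer-line dynamics.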

This activity will be at the forefront of the nuclear fusion revolution currently underway in Europe [R3, R7] and the United States [R4], addressing a wide range of cryogenic engineering fields such as refrigeration technologies, superfluid helium, thermo-hydraulics, materials properties, system and subsystem design, and the design and execution of cryogenic tests. It will thus be useful for the development of new generations of particle accelerators using HTS magnets.

[R1] Cryogenic requirements for the JT-60SA Tokamak https://doi.org/10.1063/1.4706907
[R2] Analysis of Cryogenic Cooling of Toroidal Field Magnets for Nuclear Fusion Reactors https://hdl.handle.net/1721.1/144277
[R3] https://tokamakenergy.com/our-fusion-energy-and-hts-technology/fusion-energy-technology/
[R4] https://tokamakenergy.com/our-fusion-energy-and-hts-technology/hts-business/
[R5] “Forced flow cryogenic cooling in fusion devices: A review” https://doi.org/10.1016/j.heliyon.2021.e06053
[R6] “Simcryogenics: a Library to Simulate and Optimize Cryoplant and Cryodistribution Dynamics” https://doi.org/10.1088/1757-899X/755/1/012076
[R7] https://renfusion.eu/
[R8] PEPR Suprafusion https://suprafusion.fr/

Bottom-up synthesis of nanographenes and study of their optical and electronic properties

This PhD is part of an ANR project that aims to synthesize perfectly soluble, individualized graphene nanoparticles in solution and to incorporate them into spin-electronics devices. To do this, we will draw on the laboratory's experience in synthesizing graphene nanoparticles and studying their optical properties to propose original structures to several groups of physicists, who will be responsible for studying the optical and electronic properties and for fabricating spin-valve devices.

Search for diffuse emissions in very-high-energy gamma rays and fundamental physics with H.E.S.S. and CTAO

Observations in very-high-energy (VHE, E > 100 GeV) gamma rays are crucial for understanding the most violent non-thermal phenomena at work in the Universe. The central region of the Milky Way is a complex region active in VHE gamma rays. Among the VHE gamma-ray sources are the supermassive black hole Sagittarius A* at the heart of the Galaxy, supernova remnants and even star-formation regions. The Galactic Center (GC) harbours a cosmic-ray accelerator reaching PeV energies, diffuse emissions from GeV to TeV including the “Galactic Center Excess” (GCE) whose origin is still unknown, potentially variable sources at TeV energies, as well as possible populations of as-yet unresolved sources (millisecond pulsars, intermediate-mass black holes). The GC should be the brightest source of annihilations of massive dark matter particles of the WIMP type. Lighter dark matter candidates, axion-like particles (ALPs), could convert into photons, and vice versa, in magnetic fields, leaving an oscillation imprint in the gamma-ray spectra of active galactic nuclei (AGN).
The H.E.S.S. observatory, located in Namibia, is composed of five imaging atmospheric Cherenkov telescopes. It is designed to detect gamma rays from a few tens of GeV to several tens of TeV. The Galactic Center region has been observed by H.E.S.S. for twenty years. These observations made it possible to detect the first Galactic pevatron and to place the strongest constraints to date on the annihilation cross section of dark matter particles in the TeV mass range. The future CTAO observatory will be deployed on two sites, one in La Palma and the other in Chile. The latter, composed of more than 50 telescopes, will provide an unprecedented scan of the Galactic Center region.
The proposed work will focus on the analysis and interpretation of H.E.S.S. observations of the Galactic Center region, searching for diffuse emissions (populations of unresolved sources, massive dark matter), as well as observations of a selection of active galactic nuclei, searching for ALPs as dark matter constituents. These new analysis frameworks will then be applied to CTAO data analyses. Involvement in the commissioning of the first MSTs in Chile and in the data analysis for early science is expected.
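The ALP-induced spectral imprint mentioned above can be sketched with a dimensionless single-domain mixing toy (all parameters are illustrative, not a realistic AGN or magnetic-field model): the photon survival probability oscillates with energy, and that modulation is what the spectral searches look for.

```python
import numpy as np

# dimensionless single-domain toy of photon-ALP mixing: the photon
# survival probability oscillates with energy, which is the spectral
# imprint searched for in AGN spectra (all parameters are invented)
def survival(E, mixing=0.3, m2=1.0, L=10.0):
    delta = m2 / (2.0 * E)                  # mass-induced momentum splitting
    osc = np.sqrt(delta**2 + mixing**2)     # effective oscillation wavenumber
    p_conv = (mixing / osc) ** 2 * np.sin(osc * L / 2.0) ** 2
    return 1.0 - p_conv                     # photon survival probability

E = np.linspace(0.05, 5.0, 500)  # toy energy axis (arbitrary units)
P = survival(E)
print(f"survival probability varies between {P.min():.2f} and {P.max():.2f}")
```

At low energy the mass term dominates and photons survive; at higher energy the mixing is maximal and oscillatory dips appear, the qualitative feature fitted against AGN spectra.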

Study of impurity transport in negative and positive triangularity plasmas

Nuclear fusion in a tokamak is a promising source of energy. However, a question arises: which plasma configuration is most likely to produce net energy? To contribute to answering this, during this PhD we will study the impact of magnetic geometry (comparing positive and negative triangularity) on the collisional and turbulent transport of tungsten (W). The performance of a tokamak strongly depends on the energy confinement it can achieve, which degrades significantly due to turbulent transport and radiation (primarily from W). On ITER, the tolerated amount of W in the core of the plasma is about 0.3 micrograms. Experiments have shown that a plasma geometry with negative triangularity (NT) is beneficial for confinement, as it significantly reduces turbulent transport. With this geometry, it is possible to reach confinement levels similar to those of the ITER configuration (H-mode in positive triangularity), without the need for a minimum power threshold and without the associated plasma-edge relaxations. However, questions remain: what level of W transport is found in NT compared to a positive geometry? What level of radiation can be predicted in future NT reactors? To address these questions, during this PhD we will evaluate the role of triangularity on impurity transport in different WEST scenarios. The first phase of the work is experimental. Subsequently, impurity transport will be modeled using collisional and turbulent models. Collaboration is planned with international experts in NT plasma configurations at UCSD (United States) and EPFL (Switzerland).

Impact of magnetohydrodynamics on the access to and dynamics of X-point radiator (XPR) regimes

ITER and future fusion power plants will need to operate without excessive degradation of the plasma-facing components (PFCs) in the divertor, the peripheral element dedicated to heat and particle exhaust in tokamaks. In this context, two key constraints must be met: heat fluxes must stay below engineering limits both in stationary conditions and during violent transient events. A recently developed operational regime can satisfy both: the X-point radiator (XPR). Experiments on many tokamaks, in particular WEST, which holds the record plasma duration in this regime (> 40 seconds), have shown that it drastically reduces heat fluxes on PFCs by converting most of the plasma energy into photons and neutral particles, and that it can also mitigate, or even suppress, deleterious magnetohydrodynamic (MHD) edge instabilities known as ELMs (edge-localised modes). The mechanisms governing this mitigation and suppression are still poorly understood. Additionally, the XPR itself can become unstable and trigger a disruption, i.e., a sudden loss of plasma confinement caused by global MHD instabilities.
The objectives of this PhD are (i) to understand the physics at play in the XPR–ELM interaction, and (ii) to optimise the access to and stability of the XPR regime. To do so, the student will use the 3D nonlinear MHD code JOREK, the European reference code in the field. The goal is to define the operational limits of a stable XPR with small or no ELMs, and to identify the main actuators (quantity and species of injected impurities, plasma geometry).
Participation in experimental campaigns of the WEST tokamak (operated by IRFM at CEA Cadarache) and of the MAST-U tokamak (operated by UKAEA) is also envisaged, to confront numerical results and predictions with experimental measurements.

Point Spread Function Modelling for Space Telescopes with a Differentiable Optical Model

Context

Weak gravitational lensing [1] is a powerful probe of the Large Scale Structure of our Universe. Cosmologists use weak lensing to study the nature of dark matter and its spatial distribution. Weak lensing missions require highly accurate shape measurements of galaxy images. The instrumental response of the telescope, called the point spread function (PSF), produces a deformation of the observed images. This deformation can be mistaken for the effects of weak lensing in the galaxy images, thus being one of the primary sources of systematic error when doing weak lensing science. Therefore, estimating a reliable and accurate PSF model is crucial for the success of any weak lensing mission [2]. The PSF field can be interpreted as a convolutional kernel that affects each of our observations of interest, which varies spatially, spectrally, and temporally. The PSF model needs to be able to cope with each of these variations. We use specific stars considered point sources in the field of view to constrain our PSF model. These stars, which are unresolved objects, provide us with degraded samples of the PSF field. The observations go through different degradations depending on the properties of the telescope. These degradations include undersampling, integration over the instrument passband, and additive noise. We finally build the PSF model using these degraded observations and then use the model to infer the PSF at the position of galaxies. This procedure constitutes the ill-posed inverse problem of PSF modelling. See [3] for a recent review on PSF modelling.
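The degradation chain described above (a spatially and chromatically varying PSF, integration over the passband with the star's SED, undersampling, additive noise) can be sketched as a toy forward model; the Gaussian PSFs and all parameters below are invented stand-ins for the real optical response.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_psf(n, fwhm):
    # toy monochromatic PSF: a normalised 2D Gaussian stamp
    sigma = fwhm / 2.355
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def observe_star(sed, fwhms, downsample=2, noise=1e-3, n=32):
    # forward model: integrate chromatic PSFs over the passband with the
    # star's SED weights, then undersample and add noise
    psf_poly = sum(w * gaussian_psf(n, f) for w, f in zip(sed, fwhms))
    low_res = psf_poly.reshape(n // downsample, downsample,
                               n // downsample, downsample).sum(axis=(1, 3))
    return low_res + noise * rng.standard_normal(low_res.shape)

sed = np.array([0.2, 0.5, 0.3])   # toy 3-bin SED (weights sum to 1)
fwhms = [3.0, 3.5, 4.0]           # toy PSF width growing with wavelength
star = observe_star(sed, fwhms)
print(star.shape)                 # (16, 16)
```

PSF modelling is the inverse problem: recover the underlying `psf_poly` field across the focal plane from many such degraded star stamps, then evaluate it at galaxy positions.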

The recently launched Euclid survey represents one of the most complex challenges for PSF modelling. Because of the very broad passband of Euclid’s visible imager (VIS) ranging from 550nm to 900nm, PSF models need to capture not only the PSF field spatial variations but also its chromatic variations. Each star observation is integrated with the object’s spectral energy distribution (SED) over the whole VIS passband. As the observations are undersampled, a super-resolution step is also required. A recent model coined WaveDiff [4] was proposed to tackle the PSF modelling problem for Euclid and is based on a differentiable optical model. WaveDiff achieved state-of-the-art performance and is currently being tested with recent observations from the Euclid survey.

The James Webb Space Telescope (JWST) was recently launched and is producing outstanding observations. The COSMOS-Web collaboration [5] is a wide-field JWST treasury program that maps a contiguous 0.6 deg2 field. The COSMOS-Web observations are available and provide a unique opportunity to test and develop a precise PSF model for JWST. In this context, several science cases, on top of weak gravitational lensing studies, can vastly profit from a precise PSF model. For example, strong gravitational lensing [6], where the PSF plays a crucial role in reconstruction, and exoplanet imaging [7], where the PSF speckles can mimic the appearance of exoplanets, therefore subtracting an accurate and precise PSF model is essential to improve the imaging and detection of exoplanets.

PhD project

The candidate will aim to develop more accurate and performant PSF models for space-based telescopes by exploiting a differentiable optical framework, focusing the effort on Euclid and JWST.

The WaveDiff model is based on the wavefront space and does not consider pixel-based or detector-level effects. These pixel errors cannot be modelled accurately in the wavefront as they arise directly on the detectors and are unrelated to the telescope’s optical aberrations. Therefore, as a first direction, we will extend the PSF modelling approach to detector-level effects by combining a parametric and a data-driven (learned) approach. We will exploit the automatic differentiation capabilities of the machine learning frameworks underlying the WaveDiff PSF model (e.g. TensorFlow, PyTorch, JAX) to accomplish this objective.

As a second direction, we will consider the joint estimation of the PSF field and the stellar spectral energy distributions (SEDs) by exploiting repeated exposures or dithers. The goal is to improve and calibrate the original SED estimation by exploiting the PSF modelling information. We will rely on our PSF model and on the fact that repeated observations of the same object change the star image (as it is imaged at different focal-plane positions) while sharing the same SED.

Another direction will be to extend WaveDiff for more general astronomical observatories like JWST with smaller fields of view. We will need to constrain the PSF model with observations from several bands to build a unique PSF model constrained by more information. The objective is to develop the next PSF model for JWST that is available for widespread use, which we will validate with the available real data from the COSMOS-Web JWST program.

A further direction will be to extend the performance of WaveDiff by including a continuous field in the form of an implicit neural representation [8], or neural field (NeRF) [9], to address the spatial variations of the PSF in the wavefront space with a more powerful and flexible model.

Finally, throughout the PhD, the candidate will collaborate on Euclid’s data-driven PSF modelling effort, which consists of applying WaveDiff to real Euclid data, and the COSMOS-Web collaboration to exploit JWST observations.

References
[1] R. Mandelbaum. “Weak Lensing for Precision Cosmology”. In: Annual Review of Astronomy and Astrophysics 56 (2018), pp. 393–433. doi: 10.1146/annurev-astro-081817-051928. arXiv: 1710.03235.
[2] T. I. Liaudat et al. “Multi-CCD modelling of the point spread function”. In: A&A 646 (2021), A27. doi: 10.1051/0004-6361/202039584.
[3] T. I. Liaudat, J.-L. Starck, and M. Kilbinger. “Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies”. In: Frontiers in Astronomy and Space Sciences 10 (2023). doi: 10.3389/fspas.2023.1158213.
[4] T. I. Liaudat, J.-L. Starck, M. Kilbinger, and P.-A. Frugier. “Rethinking data-driven point spread function modeling with a differentiable optical model”. In: Inverse Problems 39.3 (Feb. 2023), p. 035008. doi: 10.1088/1361-6420/acb664.
[5] C. M. Casey et al. “COSMOS-Web: An Overview of the JWST Cosmic Origins Survey”. In: The Astrophysical Journal 954.1 (Aug. 2023), p. 31. doi: 10.3847/1538-4357/acc2bc.
[6] A. Acebron et al. “The Next Step in Galaxy Cluster Strong Lensing: Modeling the Surface Brightness of Multiply Imaged Sources”. In: ApJ 976.1, 110 (Nov. 2024), p. 110. doi: 10.3847/1538-4357/ad8343. arXiv: 2410.01883 [astro-ph.GA].
[7] B. Y. Feng et al. “Exoplanet Imaging via Differentiable Rendering”. In: IEEE Transactions on Computational Imaging 11 (2025), pp. 36–51. doi: 10.1109/TCI.2025.3525971.
[8] Y. Xie et al. “Neural Fields in Visual Computing and Beyond”. In: arXiv e-prints (Nov. 2021). doi: 10.48550/arXiv.2111.11426. arXiv: 2111.11426 [cs.CV].
[9] B. Mildenhall et al. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis”. In: arXiv e-prints (Mar. 2020). doi: 10.48550/arXiv.2003.08934. arXiv: 2003.08934 [cs.CV].

Study of uranium-235 fission induced by neutrons from 0.5 to 40 MeV at NFS-SPIRAL2 using the FALSTAFF spectrometer and the FIFRELIN code

The presented project has two main objectives. The first is the realization (construction, calibration, data taking and data analysis) of a first experiment with the FALSTAFF detector in its two-arm configuration. In this configuration, FALSTAFF will be able to detect in coincidence both fragments emitted in fast-neutron-induced fission reactions. These neutrons will be provided by the neutron beam of NFS-SPIRAL2 at GANIL. The advantage of direct kinematics is the ability to determine, on an event-by-event basis, the excitation energy of the fissioning nucleus from the measured kinetic energy of the incident neutron.
For this first experiment, we will use a uranium-235 target. 235U is the main source of fission neutrons in nuclear reactors and is therefore at the heart of these systems. Understanding neutron-induced fission of 235U is thus essential, and the rather exclusive data FALSTAFF will provide, with not only the identification of the fission fragments but also their kinematics, will allow the fissioning system itself to be reconstructed. To our knowledge, such a measurement in direct kinematics has never been performed with the accuracy we are aiming at.
To perform this experiment, we have improved and added detection capabilities to the FALSTAFF spectrometer, in particular with the financial support of the Région Normandie over the last two years. This experimental work will be complemented by work on a theoretical model developed by our collaborators at CEA Cadarache. We will compare our detailed data with the predictions of the model and refine the model, within the constraints of nuclear physics, so that its results approach the data. Such a test of this model on data as complete as those we will obtain with FALSTAFF has never been done so far.
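The event-by-event reconstruction enabled by the two-arm configuration can be illustrated with a toy two-velocity ("2v") relation: with both fragment velocities measured in coincidence, momentum conservation fixes the mass split of the fissioning nucleus. The velocities below are invented and neutron evaporation is neglected.

```python
# toy two-arm ("2v") kinematic reconstruction for 235U + n -> 236U* fission:
# momentum conservation in the fissioning-nucleus frame gives A1*v1 = A2*v2,
# so the measured velocity ratio fixes the fragment mass split
def fragment_masses(v1, v2, a_fis=236):
    a1 = a_fis * v2 / (v1 + v2)   # lighter fragment moves faster
    return a1, a_fis - a1

a_light, a_heavy = fragment_masses(v1=1.4, v2=1.0)   # toy velocities, cm/ns
print(f"A1 = {a_light:.1f}, A2 = {a_heavy:.1f}")
```

A typical asymmetric split (roughly 98/138 here) emerges directly from the velocity ratio, which is why coincident detection of both fragments is so powerful.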

Precise time tagging and tracking of leptons in Enhanced Neutrino Beams with large area PICOSEC-Micromegas detectors

The ENUBET (Enhanced NeUtrino BEams from kaon Tagging) project aims to develop a monitored neutrino beam with a precisely known flux and flavor composition, enabling percent-level precision in neutrino cross-section measurements. This is achieved by instrumenting the decay tunnel to detect and identify charged leptons from kaon decays.
The PICOSEC Micromegas detector is a fast, double-stage micro-pattern gaseous detector that combines a Cherenkov radiator, a photocathode, and a Micromegas amplification structure. Unlike standard Micromegas, it operates with amplification also occurring in the drift region, where the electric field is even stronger than in the amplification gap. This configuration enables exceptional timing performance, with measured resolutions of about 12 ps for muons and ~45 ps for single photoelectrons, making it one of the fastest gaseous detectors ever developed.
Integrating large-area PICOSEC Micromegas modules in the ENUBET decay tunnel would provide sub-100 ps timing for lepton tagging, improving particle identification, reducing pile-up, and enhancing the association between detected leptons and their parent kaon decays — a key step toward precision-controlled neutrino beams.
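As a rough illustration of why photoelectron statistics matter for this detector, the quoted resolutions are broadly consistent with a first-order 1/sqrt(N) scaling from the single-photoelectron resolution (a simplified statistical scaling, not a full detector model):

```python
import numpy as np

# first-order scaling of timing resolution with the number of detected
# photoelectrons: sigma(N) ≈ sigma_single / sqrt(N), anchored to the
# ~45 ps single-photoelectron resolution quoted in the text
sigma_1pe = 45.0   # ps, single-photoelectron resolution
for n_pe in (1, 4, 10):
    print(f"N_pe = {n_pe:2d} -> sigma ≈ {sigma_1pe / np.sqrt(n_pe):.1f} ps")
```

With of order ten photoelectrons per muon, this simple scaling already lands near the measured ~12 ps, which motivates optimizing the Cherenkov radiator and photocathode yield in the prototypes.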
Within the framework of this PhD work, the candidate will optimize and characterize 10 × 10 cm² PICOSEC Micromegas prototypes, and contribute to the design and development of larger-area detectors for the nuSCOPE experiment and the ENUBET hadron dump instrumentation.
