Topological optimization of µLED optical performance
The performance of micro-LEDs (µLEDs) is crucial for micro-displays, a field of expertise of the LITE laboratory at CEA-LETI. However, simulating these components is complex and computationally expensive because of the incoherent nature of the light sources and the geometries involved. This limits the ability to explore multi-parameter design spaces effectively.
This thesis proposes to develop an innovative finite element method to accelerate simulations and enable the use of topological optimization. The goal is to produce non-intuitive designs that maximize performance while respecting industrial constraints.
The work is divided into two phases:
Develop a fast and reliable simulation method that incorporates appropriate physical approximations for incoherent sources and significantly reduces computation times.
Design a robust topological optimization framework that includes fabrication constraints to generate immediately realizable designs.
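To make phase two concrete, here is a minimal sketch of a density-based topology optimization loop in which a density filter enforces a minimum feature size, one simple way of encoding a fabrication constraint. The figure of merit and its gradient are toy stand-ins; in the actual work they would come from the accelerated electromagnetic finite element solver and its adjoint.

```python
import numpy as np

def density_filter(rho, radius):
    """Smooth the density field with a box filter of given radius (in cells).
    This imposes a minimum feature size, a simple fabrication constraint."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(rho, kernel, mode="same")

def figure_of_merit(rho_f):
    """Toy stand-in for the FEM-computed optical figure of merit (e.g. light
    extraction efficiency). A real implementation would call the accelerated
    electromagnetic solver here."""
    target = 0.5 * (1 + np.sin(np.linspace(0, 2 * np.pi, rho_f.size)))
    return -np.sum((rho_f - target) ** 2)

def gradient(rho_f):
    """Toy adjoint gradient matching the toy figure of merit above."""
    target = 0.5 * (1 + np.sin(np.linspace(0, 2 * np.pi, rho_f.size)))
    return -2.0 * (rho_f - target)

rho = 0.5 * np.ones(128)            # material density in [0, 1] on the design grid
step, radius = 0.1, 3

for it in range(200):
    rho_f = density_filter(rho, radius)          # filtered (manufacturable) design
    g = density_filter(gradient(rho_f), radius)  # chain rule through the (symmetric) filter
    rho = np.clip(rho + step * g, 0.0, 1.0)      # projected gradient ascent

print("final figure of merit:", figure_of_merit(density_filter(rho, radius)))
```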
The expected results include optimized designs for micro-displays with enhanced performance and a methodology that can be applied to other photonic devices.
Characterization of motor recovery in stroke patients during BCI-guided rehabilitation
Brain-computer interfaces (BCIs) make it possible to restore lost functions by allowing individuals to control external devices through the modulation of their brain activity. The CEA has developed a BCI technology based on the WIMAGINE implant, which records brain activity using electrocorticography (ECoG), along with algorithms for decoding motor intentions. This technology was initially tested for controlling effectors such as exoskeletons and spinal cord stimulation devices to compensate for severe motor impairments. While this initial paradigm of substitution and compensation is promising, another application potential is now emerging: functional recovery through BCI-guided rehabilitation. Current literature suggests that BCIs, when used intensively and in a targeted manner, can promote neural plasticity and, in turn, improve residual motor abilities. In particular, implanted ECoG-based BCIs could offer significant therapeutic benefits. The objective of this thesis is therefore to assess the potential of the CEA's BCI technology to enhance patients' residual motor functions through neural plasticity.
This work will follow a rigorous and multidisciplinary scientific methodology, including a comprehensive review of the scientific literature, the setup and execution of experiments with patients, the algorithmic development of tools for monitoring and analyzing patient progress, and the publication of significant results in leading scientific journals.
This PhD is intended for a student specializing in biomedical engineering, with expertise in signal processing and the analysis of complex physiological data, as well as experience in Python or Matlab. A strong interest in clinical experimentation and neuroscience will also be required. The student will work within a multidisciplinary team at CLINATEC, contributing to cutting-edge research in the field of BCIs.
Advancing Semantic Representation, Alignment, and Reasoning in Multi-Agent 6G Communication Systems
Semantic communications is an emerging and transformative research area, where the focus shifts from transmitting raw data to conveying meaningful information. While initial models and design solutions have laid foundational principles, they often rest on strong assumptions regarding the extraction, representation, and interpretation of semantic content. The advent of 6G networks introduces new challenges, particularly with the growing need for multi-agent systems where multiple AI-driven agents interact seamlessly.
In this context, the challenge of semantic alignment becomes critical. Existing literature on multi-agent semantic communications frequently assumes that all agents share a common understanding and interpretation framework, a condition rarely met in practical scenarios. Misaligned representations can lead to communication inefficiencies, loss of critical information, and misinterpretations.
This PhD research aims to advance the state-of-the-art by investigating the principles of semantic representation, alignment, and reasoning in multi-AI agent environments within 6G communication networks. The study will explore how agents can dynamically align their semantic models, ensuring consistent interpretation of messages while accounting for differences in context, objectives, and prior knowledge. By leveraging techniques from artificial intelligence, such as machine learning, ontology alignment, and multi-agent reasoning, the goal is to propose novel frameworks that enhance communication efficiency and effectiveness in multi-agent settings. This work will contribute to more adaptive, intelligent, and context-aware communication systems that are key to the evolution of 6G networks.
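As a purely illustrative example of the alignment problem (not a method prescribed by the thesis), the sketch below aligns the embedding spaces of two hypothetical agents with an orthogonal Procrustes rotation estimated from a small set of shared anchor concepts; the embeddings, dimensions, and noise level are all assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical semantic embeddings of the same 20 anchor concepts, as learned
# independently by two agents; agent B sees a rotated, noisy version of A's space.
dim, n_anchors = 16, 20
A = rng.normal(size=(n_anchors, dim))
true_rotation = np.linalg.qr(rng.normal(size=(dim, dim)))[0]
B = A @ true_rotation + 0.01 * rng.normal(size=(n_anchors, dim))

# Orthogonal Procrustes: find the orthogonal R minimizing ||A @ R - B||_F,
# so that agent A can map its representations into agent B's semantic space.
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt

# A new message embedded by agent A is now interpretable by agent B.
msg_A = rng.normal(size=dim)
msg_in_B_space = msg_A @ R
print("residual alignment error:", np.linalg.norm(A @ R - B))
```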
Enhancing Communication Security Through Faster-than-Nyquist Transceiver Design
In light of the growing demand for transmission capacity in communication networks, it is essential to explore innovative techniques that enhance spectral efficiency while maintaining the reliability and security of transmission links. This project proposes a comprehensive theoretical modeling of Faster-Than-Nyquist (FTN) systems, accompanied by simulations and numerical analyses to evaluate their performance in various communication scenarios. The study will aim to identify the trade-offs needed to maximize transmission rates while accounting for constraints related to implementation complexity and transmission security, a crucial issue in an environment increasingly vulnerable to cyber threats. This work will help identify opportunities for capacity enhancement while highlighting the technological challenges and adjustments necessary for the widespread adoption of these systems on critical and secure links.
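To give a concrete picture of the principle (a toy illustration under ideal sinc-pulse assumptions, not part of the proposed theoretical modeling), the sketch below packs Nyquist pulses at a fraction τ < 1 of their natural symbol period and measures the intersymbol interference that this acceleration introduces at the receiver's sampling instants.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 1.0          # Nyquist symbol period for the sinc pulse g(t) = sinc(t / T)
tau = 0.8        # FTN acceleration factor: symbols sent every tau * T < T
n_sym = 64
symbols = rng.choice([-1.0, 1.0], size=n_sym)   # BPSK symbols

def g(t):
    """Ideal Nyquist (sinc) pulse; ISI-free only when sampled every T."""
    return np.sinc(t / T)

# Samples of the received (noise-free) signal at the FTN instants t_n = n * tau * T.
t_samples = np.arange(n_sym) * tau * T
received = np.array([
    sum(a_k * g(t - k * tau * T) for k, a_k in enumerate(symbols))
    for t in t_samples
])

# With tau = 1 the off-diagonal taps g((n - k) * tau * T) vanish; with tau < 1
# they do not, so each sample mixes neighbouring symbols (intersymbol interference).
isi = received - symbols
print("ISI power per symbol:", np.mean(isi**2))
print("spectral-efficiency gain factor:", 1.0 / tau)
```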
Numerical simulation of the impact between immersed structures in a compressible liquid using immersed-boundary-type approaches
Many industrial systems involve structures immersed in dense fluids. Examples include the submarine industry and, closer to the present context, certain 4th-generation nuclear reactors that use coolants such as sodium or molten salt mixtures. The effect of the surrounding fluid on the contact forces between structures is of primary importance, particularly during accidental transients that can generate large displacements of structures whose residual integrity must be demonstrated for safety purposes.
In this thesis, we are particularly interested in modeling the rapid impact of an immersed structural fragment against a wall, resulting, for example, from an explosive phenomenon in a sodium-cooled nuclear reactor vessel. In this setting, the sodium, modeled as a compressible fluid, is treated numerically with a finite-volume approach, while the reactor's internal structures are treated with a finite-element approach. To handle large structural displacements and possible fracturing, “immersed boundary” techniques are used for the fluid-structure interaction.
The aim of this thesis is to define an innovative numerical method to better simulate the fluid film between two structures coming into contact in this setting. First, we will identify the physical characteristics of the flow within the fluid film (compressibility, viscosity, etc.) that most influence the kinematics of the structures. The main challenge of the thesis will then be to improve current numerical methods so that they represent the flow in the fluid film as accurately as possible.
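As a purely didactic illustration of the configuration, and not of the methods the thesis will actually develop, the sketch below squeezes a one-dimensional compressible fluid column (linear acoustics solved with a simple finite-volume scheme on a fixed grid) between a rigid wall and a fragment moving toward it, the fragment being imposed through a direct-forcing immersed-boundary treatment; all physical parameters are illustrative.

```python
import numpy as np

# Fluid properties (illustrative, loosely sodium-like) and grid.
rho0, c = 850.0, 2500.0          # density [kg/m^3], sound speed [m/s]
L, n = 0.1, 400                  # domain length [m], number of cells
dx = L / n
x = (np.arange(n) + 0.5) * dx
dt = 0.4 * dx / c                # CFL-limited time step

p = np.zeros(n)                  # acoustic pressure perturbation
u = np.zeros(n)                  # fluid velocity

x_frag, v_frag = 0.08, -50.0     # fragment position [m] and impact velocity [m/s]

def lax_friedrichs_step(p, u):
    """One Lax-Friedrichs step for linear acoustics:
       dp/dt + rho0*c^2 du/dx = 0,  du/dt + (1/rho0) dp/dx = 0."""
    pe = np.concatenate(([p[0]], p, [p[-1]]))      # rigid walls at x = 0 and x = L:
    ue = np.concatenate(([-u[0]], u, [-u[-1]]))    # reflective ghost cells
    p_new = 0.5 * (pe[:-2] + pe[2:]) - dt / (2 * dx) * rho0 * c**2 * (ue[2:] - ue[:-2])
    u_new = 0.5 * (ue[:-2] + ue[2:]) - dt / (2 * dx) / rho0 * (pe[2:] - pe[:-2])
    return p_new, u_new

for step in range(60000):
    p, u = lax_friedrichs_step(p, u)
    x_frag += v_frag * dt
    solid = x >= x_frag                 # cells currently covered by the fragment
    u[solid] = v_frag                   # direct forcing: impose the solid velocity
    if x_frag <= 2 * dx:                # stop just before the fragment reaches the wall
        break

film = p[~solid]
print(f"film thickness: {x_frag:.4f} m, peak film pressure: {film.max():.1f} Pa")
```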
The proposed thesis will be carried out at CEA Saclay, in close collaboration with the EM2C laboratory at CentraleSupélec, within the environment of the Université Paris-Saclay. The PhD student will be immersed in a team with recognized expertise in transient simulations of fluid-structure interaction.
Monte Carlo methods for sensitivity to geometry parameters in reactor physics
The Monte Carlo method is considered the most accurate approach for simulating neutron transport in a reactor core, since it requires few or no approximations and easily handles complex geometric shapes (no discretisation is involved). A particular challenge for Monte Carlo simulation in reactor physics applications is calculating the impact of a small model change: formally, this involves computing the derivative of an observable with respect to a given parameter. In a Monte Carlo code, the statistical uncertainty is considerably amplified when calculating a difference between similar values. Consequently, several Monte Carlo techniques have been developed to estimate perturbations directly. However, calculating perturbations induced by a change in reactor geometry remains fundamentally an open problem. The aim of this thesis is to investigate the advantages and shortcomings of existing geometric perturbation methods and to propose new ways of calculating the derivatives of reactor parameters with respect to changes in its geometry. The challenge is twofold. First, it will be necessary to design algorithms that can efficiently calculate the geometric perturbation itself. Second, the proposed approaches will have to be adapted to high-performance computing environments.
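The difficulty, and the benefit of estimating the perturbation directly, can be seen on a toy problem (unrelated to any specific production code): the transmission probability of neutrons through a purely absorbing slab of thickness L, perturbed to L + dL. The sketch below compares a naive finite difference of two independent simulations with a correlated-sampling estimate that scores the same sampled flight paths on both geometries.

```python
import numpy as np

sigma_t = 1.0        # total (absorbing) cross section [1/cm]
L, dL = 2.0, 1e-3    # slab thickness and a small geometric perturbation [cm]
n = 10**6            # number of histories
exact = np.exp(-sigma_t * (L + dL)) - np.exp(-sigma_t * L)

rng = np.random.default_rng(42)

def transmission(paths, thickness):
    """Fraction of sampled free-flight paths that cross the whole slab."""
    return np.mean(paths > thickness)

# Naive approach: two INDEPENDENT simulations, then a finite difference.
# The statistical noise of each run (~1/sqrt(n)) dwarfs the tiny difference.
paths_a = rng.exponential(1.0 / sigma_t, size=n)
paths_b = rng.exponential(1.0 / sigma_t, size=n)
naive = transmission(paths_b, L + dL) - transmission(paths_a, L)

# Correlated sampling: the SAME histories are scored on both geometries, so the
# noise cancels in the difference and only the perturbation effect remains.
paths = rng.exponential(1.0 / sigma_t, size=n)
correlated = transmission(paths, L + dL) - transmission(paths, L)

print(f"exact perturbation       : {exact:.3e}")
print(f"independent-runs estimate: {naive:.3e}")
print(f"correlated estimate      : {correlated:.3e}")
```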
Impact of power histories on the decay heat of spent nuclear fuel
Decay heat is the energy released by the disintegration of radionuclides present in spent fuel. Precise knowledge of its average value and range of variations is important for the design and safety of spent fuel transport and storage systems. Since this information cannot be measured exhaustively, numerical simulation tools are used to estimate the nominal value of decay heat and quantify its variations due to uncertainties in nuclear data.
In this PhD, the aim is to quantify the variations in decay heat induced by reactor operating data, particularly power histories, i.e. the instantaneous power of the fuel assemblies during their residence in the core. This task is particularly challenging because the input data are no longer scalar quantities but time-dependent functions. A surrogate model of the scientific computing tool will therefore be developed to reduce computation time. The global modeling of the problem will be carried out within a Bayesian framework, using model-reduction approaches coupled with multifidelity methods. Bayesian inference will ultimately be used to solve an inverse problem quantifying the uncertainties induced by the power histories.
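A minimal sketch of such a workflow is given below, under purely illustrative assumptions: synthetic power histories, a cheap analytical stand-in for the depletion and decay-heat computation, principal component analysis to reduce the functional input to a few coefficients, and a Gaussian-process surrogate through which the variability of the histories is propagated.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1500.0, 200)          # residence time in the core [days]

def sample_histories(n):
    """Synthetic power histories: a nominal plateau with smooth random cycles."""
    base = 1.0 + 0.1 * rng.normal(size=(n, 1)) * np.sin(2 * np.pi * t / 500.0)
    noise = 0.05 * rng.normal(size=(n, 1)) * np.cos(2 * np.pi * t / 150.0)
    return base + noise

def decay_heat(histories):
    """Cheap analytical stand-in for the scientific computing tool: decay heat
    shortly after shutdown, approximated as an integral of the power history
    weighted by an exponentially fading memory kernel."""
    kernel = np.exp(-(t[-1] - t) / 300.0)
    return np.sum(histories * kernel, axis=1) * (t[1] - t[0])

# Build a small design of experiments and fit the surrogate on reduced inputs.
X_train = sample_histories(80)
y_train = decay_heat(X_train)
pca = PCA(n_components=5).fit(X_train)      # functional input -> a few coefficients
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(pca.transform(X_train), y_train)

# Propagate the variability of the power histories through the surrogate.
X_new = sample_histories(5000)
mean, std = gp.predict(pca.transform(X_new), return_std=True)
print("decay-heat spread induced by power histories:", mean.std())
print("mean surrogate predictive std:", std.mean())
```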
The doctoral student will join the Nuclear Projects Laboratory of the IRESNE institute at CEA Cadarache. He/she will develop skills in neutron simulation, data science, and nuclear reactors. He/she will be given the opportunity to present his/her work to various audiences and publish it in peer-reviewed journals.
Impact of synthesis on the modeling of sodium storage mechanisms in hard carbon
Sodium-ion (Na-ion) batteries are attracting considerable interest as a credible alternative to the lithium-ion batteries widely used today. The abundance of sodium, together with the possibility of electrode materials free of critical elements, has intensified research into Na-ion batteries. Hard carbon (HC) has been identified as the most suitable negative electrode for this technology. However, there is no consensus on the mechanisms of sodium storage in HC, because the many precursors and synthesis routes lead to markedly different HCs, which clearly do not store sodium in the same way. A large database relates synthesis parameters (precursor, washing, pre-treatment, pyrolysis, grinding) to HC properties (porosity, structure, morphology, surface chemistry, defects), but it does not explain these relationships. The approach envisaged in this thesis is therefore a multiphysics modeling of HC performance, exploiting the large existing characterization database, to understand the influence of the precursor and the synthesis method.
Modeling of Critical Heat Flux Using Lattice Boltzmann Methods: Application to the Experimental Devices of the RJH
Lattice Boltzmann methods (LBM) are numerical techniques used to simulate transport phenomena in complex systems. They model fluid behavior in terms of particle populations that move on a discrete grid (a "lattice"). Unlike classical methods, which directly solve the differential equations governing the fluid, LBM simulates the evolution of particle distribution functions in a discrete space, using propagation and collision rules. The choice of the lattice is a crucial step, as it directly affects the accuracy, efficiency, and stability of the simulations: it determines how fluid particles interact and move in space, as well as how space and time are discretized.
LBM exhibits natural parallelism, since the calculations at each grid point are largely independent. Although classical CFD methods based on solving the Navier-Stokes equations can also be parallelized, their nonlinear terms can make parallelism harder to manage, especially for turbulent flows or irregular meshes. LBM therefore makes it possible to capture complex phenomena at a lower computational cost. Recent work has shown that LBM can reproduce the Nukiyama curve (pool boiling) and thus accurately compute the critical heat flux. This flux marks the onset of the boiling crisis, which results in a sudden degradation of heat transfer.
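For illustration, the sketch below implements the basic stream-and-collide cycle of a single-phase BGK lattice Boltzmann scheme on the standard D2Q9 lattice (periodic domain, decaying shear wave); the multiphase and thermal ingredients required to reach the boiling crisis are of course not included.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                      # BGK relaxation time (viscosity = (tau - 0.5)/3)

nx, ny, nsteps = 64, 64, 500
y = np.arange(ny) / ny

# Initial condition: uniform density with a small sinusoidal shear wave u_x(y).
rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * y)[None, :] * np.ones((nx, ny))
uy = np.zeros((nx, ny))

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    feq = np.empty((9, nx, ny))
    usq = ux**2 + uy**2
    for i in range(9):
        eu = e[i, 0] * ux + e[i, 1] * uy
        feq[i] = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    return feq

f = equilibrium(rho, ux, uy)

for step in range(nsteps):
    # Collision: relax each population toward its local equilibrium.
    rho = f.sum(axis=0)
    ux = np.einsum('i,ixy->xy', e[:, 0], f) / rho
    uy = np.einsum('i,ixy->xy', e[:, 1], f) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau
    # Streaming: each population moves one cell along its lattice velocity.
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)

print("max velocity after viscous decay:", np.abs(ux).max())
```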
The critical heat flux is a crucial issue for the Jules Horowitz Reactor (RJH), whose experimental devices (DEX) are cooled by water in natural or forced convection. To guarantee proper cooling of the DEX and the safety of the reactor, it is essential to ensure that the critical heat flux is not reached within the studied parameter range; it must therefore be determined precisely.
In the first part of the study, the student will define a lattice suited to applying LBM to an RJH experimental device in natural convection. The results will then be consolidated by comparison with available data. Finally, exploratory calculations in forced convection (from laminar to turbulent flow) will be carried out.