High mobility mobile manipulator control in a dynamic context
The development of mobile manipulators capable of adapting to new conditions is a major step towards new means of production, for both industrial and agricultural applications. Such technologies enable repetitive tasks to be carried out with precision and without the constraints of a limited workspace. Nevertheless, the efficiency of such robots depends on their ability to adapt to the variability of their changing environment and of the task to be performed. This thesis therefore proposes to design mechanisms for adapting the sensorimotor behaviors of this type of robot, in order to ensure that its actions remain appropriate to the situation. It aims to extend the reconfiguration capabilities of perception and control approaches through Artificial Intelligence, understood here as deep learning. The goal is to develop new decision-making architectures capable of optimizing robotic behaviors for mobile manipulation in changing contexts (notably indoor-outdoor transitions) and for carrying out a range of precision tasks.
Scalability of the Network Digital Twin in Complex Communication Networks
Communication networks are experiencing exponential growth, both in the deployment of network infrastructures (particularly visible in the gradual and sustained evolution towards 6G networks) and in the number of machines, covering a wide range of devices from Cloud servers to lightweight embedded IoT components (e.g. Systems on Chip: SoC), including mobile terminals such as smartphones.
This ecosystem also encompasses a variety of software components, ranging from applications (e.g. A/V streaming) to the protocols of the different communication network layers. Furthermore, such an ecosystem is intrinsically dynamic because of the following features:
- Changes in network topology: due, for example, to hardware/software failures, user mobility, operator network resource management policies, etc.
- Changes in the usage/consumption of network resources (bandwidth, memory, CPU, battery, etc.), driven by user needs, operator network resource management policies, etc.
To ensure effective supervision and management of communication networks, whether fine-grained or at an abstract level, various network management services/platforms, such as SNMP, CMIP, LWM2M, CoMI, and SDN, have been proposed and documented in the networking literature and standards bodies. These management platforms have been broadly adopted by network operators, service providers, and industry; they often incorporate advanced features such as automated control loops (e.g. rule-based, expert-system-based, ML-based), further enhancing their ability to optimize network management operations.
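To make the notion of an automated control loop concrete, the following is a minimal, purely illustrative sketch of a rule-based loop (monitor, analyze, act). The functions poll_link_utilization and apply_bandwidth_policy are hypothetical placeholders, not the API of SNMP, SDN, or any real management platform.

```python
# Minimal sketch of a rule-based network control loop (illustrative only).
# poll_link_utilization() and apply_bandwidth_policy() are hypothetical placeholders.
import random
import time

UTILIZATION_THRESHOLD = 0.8  # reconfigure when a link exceeds 80% utilization


def poll_link_utilization(link_id: str) -> float:
    """Placeholder for a telemetry read (e.g. an SNMP GET); simulated here."""
    return random.uniform(0.0, 1.0)


def apply_bandwidth_policy(link_id: str, extra_capacity: float) -> None:
    """Placeholder for a (re)configuration action pushed to the network."""
    print(f"[action] allocating +{extra_capacity:.0%} capacity on {link_id}")


def control_loop(links, period_s: float = 5.0, iterations: int = 3) -> None:
    """Monitor -> analyze -> act: the simplest form of an automated control loop."""
    for _ in range(iterations):
        for link in links:
            if poll_link_utilization(link) > UTILIZATION_THRESHOLD:
                apply_bandwidth_policy(link, extra_capacity=0.2)
        time.sleep(period_s)


if __name__ == "__main__":
    control_loop(["link-A", "link-B"], period_s=0.1)
```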
Despite the extensive exploration and exploitation of these network management platforms, they do not guarantee an effective (re)configuration without intrinsic risks/errors, which can cause serious outages of network applications and services. This is particularly true when the objective of the network (re)configuration is real-time optimization of the network, analysis/tests in operational mode (what-if analysis), planning of updates/modernizations/extensions of the communication network, etc. For such (re)configuration objectives, a new network management paradigm has to be designed.
In recent years, the communication network research community has started exploring the adoption of the digital twin concept in the networking context (Network Digital Twin: NDT). The objective behind this adoption is to support the management of the communication network for various purposes, including those mentioned in the previous paragraph.
The NDT is a digital twin of the real/physical communication network (Physical Twin Network: PTN), making it possible to manipulate a digital copy of the real communication network without risk. This allows, in particular, visualizing/predicting the evolution (or the behavior, the state) of the real network if a given network configuration were to be applied. Beyond this aspect, the NDT and the PTN exchange information via one or more communication interfaces with the aim of keeping the states of the NDT and the PTN synchronized.
Nonetheless, setting up a network digital twin (NDT) is not a simple task. Indeed, frequent and real-time PTN-NDT synchronization poses a scalability problem for complex networks, where every piece of network information is potentially reported to the NDT (e.g. a very large number of network entities, very dynamic topologies, a large volume of information per node/network link).
Various scientific contributions have addressed the NDT. State-of-the-art contributions focus on establishing scenarios, requirements, and architectures for the NDT. Nevertheless, the literature does not tackle the NDT's scalability problem.
The objective of this PhD thesis is to address the scalability problem of network digital twins by exploring new machine learning models for network information selection and prediction.
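As a purely illustrative sketch of one possible direction (not the method of the thesis), the twin can predict each metric and request an update only when the observed value deviates from the prediction by more than a tolerance, thereby reducing synchronization traffic. The class name and the exponential-moving-average predictor below are assumptions for the example; the thesis targets learned (ML-based) selection and prediction models.

```python
# Illustrative sketch of prediction-based information selection for PTN-NDT sync.
# The EMA predictor and the TwinMetric class are assumptions made for this example.


class TwinMetric:
    """Per-metric state kept on the NDT side."""

    def __init__(self, initial: float, alpha: float = 0.5, tolerance: float = 0.05):
        self.estimate = initial      # twin's current belief about the metric
        self.alpha = alpha           # smoothing factor of the predictor
        self.tolerance = tolerance   # relative deviation that triggers a sync

    def predict(self) -> float:
        return self.estimate

    def maybe_sync(self, observed: float) -> bool:
        """Return True (and update the twin) only if the prediction error is too large."""
        error = abs(observed - self.predict()) / max(abs(observed), 1e-9)
        if error > self.tolerance:
            self.estimate = self.alpha * observed + (1 - self.alpha) * self.estimate
            return True
        return False


if __name__ == "__main__":
    cpu_load = TwinMetric(initial=0.30)
    stream = [0.31, 0.30, 0.52, 0.51, 0.50, 0.29]  # measurements on the physical network
    synced = [t for t, v in enumerate(stream) if cpu_load.maybe_sync(v)]
    print(f"updates transmitted at steps {synced} out of {len(stream)} measurements")
```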
Defense of scene analysis models against adversarial attacks
Many applications require scene analysis modules such as object detection and recognition, or pose recognition. Deep neural networks are nowadays among the most efficient models for performing a large number of vision tasks, sometimes simultaneously in the case of multi-task learning. However, they have been shown to be vulnerable to adversarial attacks: it is possible to add to the input data perturbations that are imperceptible to the human eye yet undermine the results of the network's inference. Yet a guarantee of reliable results is essential for applications such as autonomous vehicles or person search in video surveillance, where security is critical. Different types of adversarial attacks and defenses have been proposed, most often for the classification problem (of images, in particular). Some works have addressed attacks on embeddings optimized by metric learning, used especially for open-set tasks such as object re-identification, facial recognition, or content-based image retrieval. The types of attacks have multiplied: some are universal, others are optimized for a particular instance. The proposed defenses must deal with new threats without sacrificing too much of the model's initial performance. Protecting input data from adversarial attacks is essential for decision systems where security vulnerabilities are critical. The objective will therefore be to study and propose different attacks and defenses applicable to scene analysis modules, especially those for object detection and object instance search in images.
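As an illustration of such imperceptible perturbations, the following is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest attacks of this kind, assuming a PyTorch image classifier. The model, image, and label arguments are placeholders; real attacks on detection or re-identification models are more involved.

```python
# Minimal FGSM sketch (illustrative), assuming a PyTorch classifier with inputs in [0, 1].
import torch
import torch.nn.functional as F


def fgsm_attack(model: torch.nn.Module,
                image: torch.Tensor,
                label: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return an adversarial copy of `image`: a perturbation of amplitude epsilon,
    aligned with the sign of the loss gradient, hence hard to perceive visually."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()


# A common (illustrative) defense counterpart is adversarial training: minimize the
# loss on fgsm_attack(model, x, y) instead of, or in addition to, the clean input x.
```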
Learning world models for advanced autonomous agents
World models are internal representations of the external environment that an agent can use to interact with the real world. They are essential for understanding the physics that govern real-world dynamics, making predictions, and planning long-horizon actions. World models can be used to simulate real-world interactions and enhance the interpretability and explainability of an agent's behavior within this environment, making them key components for advanced autonomous agent models.
Nevertheless, building an accurate world model remains challenging. The goal of this PhD is to develop methodologies for learning world models and to study their use in the context of autonomous driving, particularly for motion forecasting and for developing autonomous navigation agents.
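For illustration, a very small sketch of a latent world model applied to motion forecasting is given below, assuming a PyTorch setup: past observations are encoded into a latent state, rolled forward by a learned transition model, and decoded into future positions. The dimensions and module choices are assumptions made for the example, not the architecture to be developed in the thesis.

```python
# Illustrative latent world model for motion forecasting (encode -> roll forward -> decode).
import torch
import torch.nn as nn


class LatentWorldModel(nn.Module):
    def __init__(self, obs_dim: int = 4, latent_dim: int = 32, horizon: int = 10):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.Linear(obs_dim, latent_dim)          # observation -> latent state
        self.transition = nn.GRUCell(latent_dim, latent_dim)   # learned latent dynamics
        self.decoder = nn.Linear(latent_dim, 2)                 # latent state -> (x, y) position

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        z = torch.tanh(self.encoder(obs))
        predictions = []
        for _ in range(self.horizon):
            z = self.transition(z, z)            # roll the latent state forward
            predictions.append(self.decoder(z))
        return torch.stack(predictions, dim=1)   # (batch, horizon, 2)


if __name__ == "__main__":
    model = LatentWorldModel()
    past_state = torch.randn(8, 4)   # batch of 8 agents: (x, y, vx, vy)
    future_xy = model(past_state)
    print(future_xy.shape)           # torch.Size([8, 10, 2])
```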
Accelerating thermo-mechanical simulations using Neural Networks --- Applications to additive manufacturing and metal forming
In several industries, such as metal forming and additive manufacturing, the discrepancy between the desired shape and the shape actually obtained is significant, which hinders the development of these manufacturing techniques. This is largely due to the complexity of the thermal and mechanical processes involved, which results in high computational simulation times.
The aim of this PhD is to significantly reduce this gap by accelerating thermo-mechanical finite element simulations, particularly through the design of a tailored neural network architecture, leveraging theoretical physical knowledge.
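As a generic illustration of one common way to leverage physical knowledge in a neural surrogate (physics-informed training), the sketch below adds the residual of a governing equation to the data-fitting loss, here a 1D steady heat equation. This is an assumption for illustration only; it does not describe the PlastiNN architecture, whose design is not public.

```python
# Generic physics-informed training sketch (not PlastiNN): data loss + PDE residual loss.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))  # T(x): 1D temperature field


def physics_residual(x: torch.Tensor) -> torch.Tensor:
    """Residual of the 1D steady heat equation d2T/dx2 = 0, computed by autodiff."""
    x = x.clone().requires_grad_(True)
    T = net(x)
    dT = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    d2T = torch.autograd.grad(dT.sum(), x, create_graph=True)[0]
    return d2T


x_data = torch.tensor([[0.0], [1.0]])   # boundary points (placeholder measurements)
T_data = torch.tensor([[0.0], [1.0]])   # imposed normalized temperatures at the boundaries
x_col = torch.rand(64, 1)               # collocation points inside the domain

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    optimizer.zero_grad()
    loss_data = nn.functional.mse_loss(net(x_data), T_data)   # fit the available data
    loss_phys = physics_residual(x_col).pow(2).mean()          # enforce the physics
    (loss_data + loss_phys).backward()
    optimizer.step()
```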
To achieve this, the thesis will benefit from a favorable ecosystem at both the LMS of École Polytechnique and CEA List: the internally developed PlastiNN architecture (patent pending), existing mechanical databases, the FactoryIA supercomputer, DGX systems, and 3D printing machines. The first step will be to extend the databases already generated from finite element simulations to the thermo-mechanical framework, then to adapt the internally developed PlastiNN architecture to these simulations, and finally to implement them.
The ultimate goal of the PhD is to demonstrate the acceleration of finite element simulations on real cases: firstly, through the implementation of feedback during metal printing via temperature field measurement to reduce the gap between the desired and manufactured geometry, and secondly, through the development of a forging control tool that achieves the desired geometry from an initial geometry. Both applications will rely on an optimization procedure made feasible by the acceleration of thermo-mechanical simulations.
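To illustrate the kind of optimization procedure that a fast surrogate makes feasible, the following is a minimal sketch of gradient-based search over process parameters so that the predicted final geometry matches a target. The surrogate below is a random placeholder, not PlastiNN; in practice it would be the trained thermo-mechanical surrogate, and the parameter and geometry dimensions are assumptions for the example.

```python
# Illustrative surrogate-in-the-loop optimization: find process parameters whose
# predicted final geometry matches a target. The surrogate here is a placeholder.
import torch
import torch.nn as nn

N_PARAMS, N_NODES = 5, 200            # process parameters; nodes of the discretized shape

surrogate = nn.Sequential(             # placeholder surrogate: parameters -> nodal displacements
    nn.Linear(N_PARAMS, 64), nn.ReLU(), nn.Linear(64, N_NODES)
)
for p in surrogate.parameters():
    p.requires_grad_(False)            # the surrogate is frozen; only process parameters move

target_shape = torch.zeros(N_NODES)    # desired geometry (placeholder)
params = torch.zeros(N_PARAMS, requires_grad=True)
optimizer = torch.optim.Adam([params], lr=1e-2)

for step in range(500):
    optimizer.zero_grad()
    predicted_shape = surrogate(params)
    loss = nn.functional.mse_loss(predicted_shape, target_shape)
    loss.backward()                    # gradients flow through the frozen surrogate to the parameters
    optimizer.step()

print(f"final geometry error: {loss.item():.4e}")
```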