
Design of in-memory high-dimensional-computing system

Electronics and microelectronics - Optoelectronics · Emerging materials and processes for nanotechnologies and microelectronics · Engineering sciences · Technological challenges

Abstract

The conventional von Neumann architecture faces many challenges in handling data-intensive artificial intelligence tasks efficiently, because of the huge amounts of data that must move between physically separated computing and storage units. A computing-in-memory (CIM) architecture performs data processing and storage in the same place, and can therefore be much more energy-efficient than state-of-the-art von Neumann architectures. Compared with CIM systems based on other memory technologies, resistive random-access memory (RRAM)-based CIM systems can consume much less power and area when processing the same amount of data. This makes RRAM very attractive for both in-memory and neuromorphic computing applications.
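As an illustration of the CIM principle, the sketch below models how an RRAM crossbar can compute a matrix-vector product in place: weights are stored as cell conductances, read voltages drive the rows, and by Ohm's and Kirchhoff's laws the current collected on each column is the corresponding dot product. All device parameters here (conductance window, differential-pair encoding) are illustrative assumptions, not values from this project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: real RRAM cells have a limited conductance
# window, so logical weights are mapped into [G_MIN, G_MAX] (siemens).
G_MIN, G_MAX = 1e-6, 1e-4
scale = (G_MAX - G_MIN) / 2.0

W = rng.uniform(-1.0, 1.0, (4, 8))   # logical weights: 4 outputs, 8 inputs

# Differential encoding: each signed weight is stored as a pair of cells.
G_pos = G_MIN + scale * (1.0 + W)    # in [G_MIN, G_MAX]
G_neg = G_MIN + scale * (1.0 - W)    # in [G_MIN, G_MAX]

V = rng.uniform(0.0, 0.2, 8)         # read voltages applied to the rows

# Kirchhoff's current law: the current on each column is
# I[i] = sum_j G[i, j] * V[j], i.e. a dot product computed in place.
I = (G_pos - G_neg) @ V              # differential column currents
y = I / (2.0 * scale)                # rescale back to logical units

assert np.allclose(y, W @ V)         # matches the digital matrix-vector product
```

Because the multiply-accumulate happens in the array itself, the weights never move, which is precisely the data traffic that dominates the energy budget of a von Neumann machine.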

In the field of machine learning, convolutional neural networks (CNN) are now widely used for artificial intelligence applications thanks to their strong performance. Nevertheless, for many tasks, machine learning requires large amounts of data and can be computationally very expensive and time-consuming to train, with well-known issues such as overfitting, exploding gradients and class imbalance. Among alternative brain-inspired computing paradigms, high-dimensional computing (HDC), based on random distributed representations, offers a promising approach to learning tasks. Unlike conventional computing, HDC computes with D-dimensional (pseudo-)random hypervectors. This brings significant advantages: a simple algorithm with a well-defined set of arithmetic operations, and fast, single-pass learning that can benefit from a memory-centric architecture that is highly energy-efficient and fast thanks to a high degree of parallelism.
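To make the HDC workflow concrete, the sketch below illustrates the canonical hypervector operations, binding (element-wise multiplication), bundling (element-wise majority) and cosine similarity, and uses them for single-pass learning of class prototypes. It is a minimal NumPy model with illustrative dimensions and toy data, not the system targeted by this project.

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000                          # hypervector dimension (typical order)

def random_hv():
    """Pseudo-random bipolar hypervector in {-1, +1}^D."""
    return rng.choice((-1, 1), size=D)

def bind(a, b):
    """Binding: element-wise product; associates two hypervectors."""
    return a * b

def bundle(hvs):
    """Bundling: element-wise majority; superposes several hypervectors."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Cosine similarity; near 0 for unrelated random hypervectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Single-pass learning: each class prototype is the bundle of the
# (feature, value) bindings of its training samples.
features = [random_hv() for _ in range(5)]
levels = {0: random_hv(), 1: random_hv()}   # quantized feature values

def encode(sample):
    return bundle([bind(f, levels[v]) for f, v in zip(features, sample)])

train = {"class_a": [(0, 0, 0, 1, 1), (0, 0, 1, 1, 1)],
         "class_b": [(1, 1, 1, 0, 0), (1, 1, 0, 0, 0)]}
prototypes = {c: bundle([encode(s) for s in ss]) for c, ss in train.items()}

# Inference: nearest prototype by cosine similarity.
query = encode((0, 0, 0, 1, 0))
pred = max(prototypes, key=lambda c: similarity(query, prototypes[c]))
print(pred)  # expected: class_a
```

Every step is either element-wise over the D components or a dot product, so the whole pipeline maps naturally onto a memory array that computes in place, which is exactly where an RRAM-based CIM architecture pays off.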

Laboratory

Département Composants Silicium (LETI)
Service des Composants pour le Calcul et la Connectivité
Laboratoire de Composants Mémoires