Bruno Raffin

Ph.D. students I really enjoy(ed) working with:

  • Sofya Dymchenko. Started in 2022
  • Lucas Meyer. Started in 2020 (co-advised with Alejandro Ribes)
  • Amal Gueroudji. Started in 2020 (co-advised with Julien Bigot)
  • Sebastian Friedemann. Defended in 2022
  • Estelle Dirand. Defended in 2018 (co-advised with Laurent Colombet)
  • Julio Toss. Defended in 2017 (co-advised with Joao Comba)
  • Marwa Sridi. Defended in 2016 (co-advised with Thierry Gautier and Vincent Faucher)
  • Matthieu Dreher. Defended in 2015
  • Joao Lima. Defended in 2014 (co-advised with Thierry Gautier and Nicolas Maillard)
  • Marie Durand. Defended in 2013 (co-advised with François Faure).
  • Benjamin Petit. Defended in 2011 (co-advised with Edmond Boyer).
  • Marc Tchiboukdjian. Defended in 2010 (co-advised with Denis Trystram and Vincent Danjean).
  • Everton Hermann. Defended in 2010 (co-advised with François Faure)
  • Jean-Denis Lesage. Defended in 2009.
  • Clement Menier. Defended in 2007 (co-advised with Edmond Boyer).
  • Jeremie Allard. Defended in 2005.
  • Jesus Verduzco. Defended in 2005.

My current research activities focus on Data Analysis for High Performance Computing. In recent years I have focused on ensemble simulations coupled with data analysis. I develop the Melissa framework for in transit analysis of ensemble runs and investigate solutions for Sensitivity Analysis, Data Assimilation, On-line Deep Surrogate Training, and Simulation-Based Inference. I have been experimenting with task-based programming and work stealing for in situ processing with Intel TBB or Dask, and with workflows based on FlowVR to combine in situ and in transit processing. I have also been leading the INRIA Challenge on the convergence between HPC, Big Data and ML.
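The in transit approach rests on updating statistics in a single pass, so ensemble members can be consumed as they stream in and never need to be stored. A minimal sketch of this idea using a Welford-style running mean/variance per mesh point (illustrative code only, not Melissa's actual API; all names here are hypothetical):

```python
class RunningStats:
    """One-pass (Welford) mean/variance per mesh point: each ensemble
    member updates the statistics as it arrives and is then discarded.
    Illustrative sketch of the in transit idea, not Melissa's API."""

    def __init__(self, field_size):
        self.n = 0                       # members seen so far
        self.mean = [0.0] * field_size   # running mean per point
        self.m2 = [0.0] * field_size     # sum of squared deviations

    def update(self, field):
        """Fold one ensemble member's field into the statistics."""
        self.n += 1
        for i, x in enumerate(field):
            d = x - self.mean[i]
            self.mean[i] += d / self.n
            self.m2[i] += d * (x - self.mean[i])

    def variance(self):
        """Unbiased sample variance per point (needs at least 2 members)."""
        return [m / (self.n - 1) for m in self.m2] if self.n > 1 else None


# Each "member" stands in for the output field of one simulation run.
stats = RunningStats(field_size=3)
for member in ([1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [2.0, 2.0, 2.0]):
    stats.update(member)
print(stats.mean)        # [2.0, 2.0, 2.0]
print(stats.variance())  # [1.0, 0.0, 1.0]
```

The same one-pass structure extends to higher moments and Sobol' indices, which is what makes the approach attractive when the ensemble is too large to store.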

Multi-core architectures (CPUs and GPUs) offer a unique opportunity for High Performance Interactive Computing, but taking full advantage of their performance is challenging given the number and heterogeneity of computing units and the complexity of the memory hierarchy. I have been working on cache-efficient data structures (for static meshes but also moving particles), relying on cache-oblivious approaches to improve the performance of parallel executions on NUMA machines as well as GPUs. Load balancing is a key feature for efficient resource usage when dealing with irregular parallelism. KAAPI, a highly optimized parallel runtime developed in our team, is my favorite environment for supporting parallel adaptive algorithms. In particular, I have been working on enabling seamless executions on hybrid multi-CPU and multi-GPU architectures by relying on a tailored work-stealing algorithm that dynamically schedules tasks on either CPUs or GPUs depending on resource activity and memory affinity criteria. One target application is the SOFA physics engine.
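The work-stealing principle can be sketched in a few lines: each worker pops tasks from the private end of its own deque and, only when idle, steals from the opposite end of a random victim's deque. This is an illustrative sketch (all class and function names are hypothetical), not KAAPI's implementation, which adds the affinity and CPU/GPU selection criteria mentioned above:

```python
import random
import threading
from collections import deque

class WorkStealingPool:
    """Minimal work-stealing scheduler sketch: each worker owns a deque,
    pops local tasks from the back (LIFO, good locality), and steals
    from the front (FIFO) of a random victim when its deque is empty."""

    def __init__(self, n_workers=4):
        self.n = n_workers
        self.deques = [deque() for _ in range(n_workers)]
        self.locks = [threading.Lock() for _ in range(n_workers)]
        self.results = []
        self.results_lock = threading.Lock()

    def submit(self, worker_id, task):
        """Push a task (a zero-argument callable) onto one worker's deque."""
        with self.locks[worker_id]:
            self.deques[worker_id].append(task)

    def _next_task(self, wid):
        # Prefer local work from the back of the owner's deque...
        with self.locks[wid]:
            if self.deques[wid]:
                return self.deques[wid].pop()
        # ...otherwise try to steal from the front of a random victim.
        for victim in random.sample(range(self.n), self.n):
            if victim == wid:
                continue
            with self.locks[victim]:
                if self.deques[victim]:
                    return self.deques[victim].popleft()
        return None  # no work found anywhere

    def run(self):
        def worker(wid):
            while True:
                task = self._next_task(wid)
                if task is None:
                    return  # tasks here spawn no children, so we can stop
                r = task()
                with self.results_lock:
                    self.results.append(r)
        threads = [threading.Thread(target=worker, args=(i,))
                   for i in range(self.n)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()


# All tasks start on worker 0; the other workers acquire work by stealing.
pool = WorkStealingPool(n_workers=4)
for i in range(32):
    pool.submit(0, lambda i=i: i * i)
pool.run()
print(sorted(pool.results))  # [0, 1, 4, 9, ..., 961]
```

Popping locally from one end while thieves take from the other is the classic deque discipline: it keeps hot tasks on the owner's cache while stolen tasks are the oldest, typically largest, units of work.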

I also worked on real-time multi-camera 3D modeling, full-body interaction and telepresence, which we experimented with on the Grimage platform, since replaced by the 50+ camera platform Kinovis (Equipex). See the successful demos at Siggraph 2007 and Siggraph 2009, as well as this telepresence experiment presented at ACMM10. Developing and running these very advanced applications on distributed GPU clusters was only possible using the FlowVR framework that we developed.

Initial work in virtual reality (early 2000s) led to the development of the Net Juggler and SoftGenLock libraries. Net Juggler distributes graphics rendering on a PC cluster. SoftGenLock enables active stereo on a Linux PC cluster with commodity graphics cards. These two pieces of software contributed to the shift from the dedicated SGI machines used at the time to power immersive environments to commodity PCs.

From 1999 to 2001 I was an Assistant Professor at LIFO, Université d'Orléans. I taught classes on cryptography and network security, computer architecture, parallel programming, networking, object-oriented programming, and operating systems.

I worked for almost two years (1998-1999) at Iowa State University on parallel computer performance evaluation and taught one calculus class per semester. The research was done with Prof. Glenn R. Luecke in close collaboration with Cray and SGI.

I obtained a Computer Science Ph.D. from Université d'Orléans in 1997, advised by Bernard Virot and Robert Azencott. My research focused on structured parallel programming (a formal approach based on operational/denotational semantics).