Yannick at Movin’On Summit

Earlier this summer, Yannick Roy, a PhD student in the Faubert Lab, was invited to attend the Movin’On Summit with a special pass giving access to all sessions, and then to participate in a panel and a podcast and write a column in a book sharing his opinion on the Future Technologies for Sustainable Mobility.

He attended various sessions on Artificial Intelligence (AI), Drones, Self-Driving Cars, Carless Cities, Electric Cars, Last-Mile Delivery, etc.

« I walk away from the Summit with a new appreciation of the hard problem of transitioning into the beautiful future we envision. Our current cities sit on decades of small and major iterations giving us the infrastructure we have today. As electric cars, self-driving cars and drones have been knocking at the door for a couple of years, one could ask: « what’s missing to see them everywhere? ». Infrastructure and legislation are clearly two major factors. » – Yannick Roy

Here is his closing remark.

Thanks to Michelin for this great opportunity.

Postdoc position for classification of cognitive and affective states using eye tracking in the context of a training task.

Faubert Lab is looking for a Postdoc to work on the detection and classification of cognitive and affective states from eye tracking data during pilot training tasks. The Postdoc will explore both online and offline solutions, as well as traditional machine learning and novel deep learning approaches, and will collaborate with other researchers exploring different modalities (e.g. EEG, facial recognition) for cognitive and affective state classification. Multimodal models will have to be developed to take advantage of the synergy between different sensors and measurements.
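As a purely illustrative sketch (not the lab’s actual pipeline), one common way to combine modalities is feature-level fusion: each modality’s feature vector is standardized on its own scale, then the vectors are concatenated for a downstream classifier. The function names and the three example modalities below are hypothetical.

```python
from statistics import mean, stdev

def zscore(vec):
    """Standardize one modality's feature vector (assumes len >= 2)."""
    m, s = mean(vec), stdev(vec)
    return [(x - m) / s if s else 0.0 for x in vec]

def fuse_features(eye_feats, eeg_feats, face_feats):
    """Early (feature-level) fusion: normalize each modality separately,
    so no single sensor dominates by scale, then concatenate into one
    vector that a downstream classifier can consume."""
    fused = []
    for modality in (eye_feats, eeg_feats, face_feats):
        fused.extend(zscore(modality))
    return fused
```

Later-stage alternatives (e.g. training one classifier per modality and fusing their decisions) trade simplicity for robustness to a missing sensor.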

Faubert Lab’s expertise is in vision and perception, and we are looking for someone with programming skills from a technical background such as engineering or computer science, preferably with experience in machine learning and/or deep learning.

Working conditions:
This project is part of an NSERC industrial collaborative research and development grant with CAE, a world leader in flight simulation and training. It is a one-year position with a salary of $40k.

Please send a curriculum vitae and a letter outlining research experience and career goals to faubertlab [at]

For more information on Faubert Lab, please visit

We thank all applicants for their interest in this position, but only those considered for an interview will be contacted.

How do you « Revolutionize Pilot Training » for the US Air Force?

The U.S. Air Force is short 2,000 pilots and needs to train new aviators faster but with the same quality. To do this, the USAF has committed to revolutionizing pilot training with the launch of the “Pilot Training Next” (PTN) initiative.

The AFWERX Innovation Challenge for PTN received almost 150 proposals from leading companies around the world. Sixteen were selected as finalists to compete in the solution showcase shootout in Austin, TX, with awards going to five companies. NAI and its flagship product, NeuroTracker, were selected based on the platform’s scientific validation of cognitive assessment and performance.

NeuroTracker is a world-leading cognitive assessment, training and rehabilitation platform used by Special Operations Forces (SOF) around the world to improve focus, situational awareness, predictive agility and decision-making under pressure.

Former Air Force T-38 Instructor Pilot and NAI Advisory Board Member, Bob O’Malley commented on the initiative explaining:

“I am impressed by the approach taken by the US Air Force and SAIC with its Pilot Training Next initiative. Involving commercially available technologies and techniques to innovate its pilot training program is aggressive, but timely. Employing tools like NeuroTracker will improve pilot cognitive skills allowing the Air Force to produce pilots faster and better.”

Read more details here.

« If you’re going to lift a weight with your arm in the gym, why wouldn’t you go work your brain out with your eyes? »

It’s no secret that the Washington Capitals won the 2017-2018 Stanley Cup.

Alexander Ovechkin made sure the whole world would know they won, after all those years of being eliminated early in the playoffs despite having one of the best hockey players of this generation (and of all time!).

However, Ovechkin didn’t do it all by himself. Support players also had to step up to accomplish the impossible – winning the Stanley cup. And stepping up, Tom Wilson certainly did!

Sports Illustrated recently wrote about Tom Wilson’s recent evolution, highlighting the importance of training his brain and vision. You can read the Sports Illustrated post here: Tom Wilson Evolves Into a Powerful Sidekick for Capitals’ Star-Studded Offense


My dad always said, ‘If you’re going to lift a weight with your arm in the gym, why wouldn’t you go work your brain out with your eyes?’

-Tom Wilson


(source: Sports Illustrated)


Tom Wilson has been using NeuroTracker on a regular basis to train his cognitive and visual abilities. Looking at his results this year, it seems to have helped quite a bit! Obviously, many other factors may have contributed to this evolution, but we’re happy to see that our research is being used by the winners of the Stanley Cup!

Tom Wilson amusingly describes NeuroTracker as “a bunch of balls bouncing around your monitor like an old Microsoft screensaver.” Indeed, NeuroTracker is a very simple task on the outside, yet backed by a lot of science and literature on the inside. Having a « simple task » allows us (the laboratory and its collaborators) to maintain better scientific control over the effects we’re studying.


NeuroTracker is a technology developed by Prof. Jocelyn Faubert’s lab.

How is LeBron James always one move ahead? Let’s ask the scientists.

On May 18th (2018), Sally Jenkins posted an article on The Washington Post about LeBron James’ ability to read the game.

She interviewed Prof. Jocelyn Faubert to find out whether he was as impressed as everybody else and what he had to say about LeBron’s skills.


« To manage all those systems, that is a form of intelligence […] and we shouldn’t be afraid to say that »

-Prof. Faubert.


Read the article here:

« How is LeBron James always one move ahead? Let’s ask the scientists. »

(source: The Washington Post)

Faubert Lab at the Montreal Grand Prix!

During the Montréal Grand Prix, Jocelyn Faubert, holder of the NSERC-Essilor industrial research chair on visual functions, and his students offered Grand Prix attendees a driving simulation to raise awareness of the key role of vision in safe driving. In partnership with Essilor Canada, this activity is part of a larger initiative by Essilor and the Fédération Internationale de l’Automobile to promote the importance of good vision for road safety.

More than 375 visitors experienced the simulation and talked with the students about the key visual components of driving and the importance of high-quality vision.


Grand Prix de Montréal


You can read the original news on the Optometry School website. (French only)

Discover our most recent papers

The Influence of the External Signal Modulation Waveform and Frequency on the Performance of a Photonic Forced Oscillator – Noemi Sánchez-Castro, Martha Alicia Palomino-Ovando, Denise Estrada-Wiese, Nydia Xcaret Valladares, Jesus Antonio del Río, Maria Beatriz de la Mora, Rafael Doti, Jocelyn Faubert and Jesus Eduardo Lugo

Three-dimensional multiple object tracking in the pediatric population: the NeuroTracker and its promising role in the management of mild traumatic brain injury – Laurie-Ann Corbin-Berrigan, Kristina Kowalski, Jocelyn Faubert, Brian Christie, Isabelle Gagnon

Above and beyond driving abilities: toward a single index – Romain Chaumillon, Delphine Bernardin, Jesse Michaels, Jocelyn Faubert

Training with a three-dimensional multiple object-tracking (3D-MOT) paradigm improves attention in students with a neurodevelopmental condition: a randomized controlled trial – D. Tullo, J. Guy, J. Faubert, A. Bertone

Three students from the Faubert Laboratory won scholarships awarded by the Réseau de Recherche en Sécurité Routière

Congratulations to Jesse Michaels, Romain Chaumillon and Sergio Mejia Romero, recipients of the 2017-2018 Student Supplements Scholarship Competition, for the following projects:

Jesse Michaels: Effects of perceptual-cognitive skills training on driving abilities

Romain Chaumillon: Beyond good vision: the influence of visual information quality on attentional reallocation mechanisms during driving

Sergio Mejia Romero: Real visual cone analysis in drivers for the evaluation of perception strategies in driving simulators

Graduate studies opportunities

Faubert Lab is looking for graduate students with programming skills and experience in machine learning and/or deep learning.

If you are interested in:

– Eye Tracking

– Electroencephalography (EEG)

– Brain-Computer Interfaces

– Perception, Attention and Visual Working Memory

– Classification of Cognitive and Affective states

check out these opportunities:




Vanessa Harrar received the Award of Excellence in the Poster Presentation category for her presentation entitled « Audio-visual multiple object tracking » at the 23rd Vision Health Research Network Annual Meeting, November 7, 2017.

Audio-visual multiple object tracking
Vanessa Harrar1,2, Eugenie Roudaia1,2 and Jocelyn Faubert1,2
  1. Visual Psychophysics and Perception Laboratory, School of Optometry, University of Montreal, Montreal, QC, Canada
  2. NSERC-Essilor Industrial Research Chair, Montreal, QC, Canada
The ability to track objects as they move is critical for successful interaction with objects in the world. The multiple object tracking (MOT) paradigm has demonstrated that, within limits, our visual attention capacity allows us to track multiple moving objects among distracters. Very little is known about dynamic auditory attention and the role of multisensory binding in attentional tracking. Here, we examined whether dynamic sounds congruent with visual targets could facilitate tracking in a 3D-MOT task. Participants tracked one or multiple target-spheres among identical distractor-spheres during 8 seconds of movement in a virtual cube. In the visual condition, targets were identified with a brief colour change, but were then indistinguishable from the distractors during the movement. In the audio-visual condition, the target-spheres were accompanied by a sound, which moved congruently with the change in the target’s position. Sound amplitude varied with distance from the observer and inter-aural amplitude difference varied with azimuth. Preliminary results with one target showed that performance was better in the audiovisual condition, which suggests that congruent sounds can facilitate attentional visual tracking. However, with multiple targets, the sounds did not facilitate tracking. This suggests that audiovisual binding may not be possible when attention is divided between multiple targets.
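The auditory manipulation described in the abstract (overall amplitude varying with distance, inter-aural amplitude difference varying with azimuth) can be sketched in a few lines. This is a hypothetical illustration assuming inverse-distance attenuation and constant-power panning; the function name, parameters, and the specific laws are invented for the example and are not the study’s actual implementation.

```python
import math

def stereo_gains(azimuth_deg, distance, ref_distance=1.0, pan_strength=0.5):
    """Illustrative per-ear amplitudes for a moving target sphere.

    Assumed model: amplitude falls off with distance from the observer
    (inverse law), and the left/right balance shifts with azimuth via
    constant-power panning.
    """
    # Inverse-distance attenuation relative to a reference distance.
    amplitude = ref_distance / max(distance, ref_distance)
    # Map azimuth (-90 = far left ... +90 = far right) to a pan in [0, 1].
    pan = 0.5 + pan_strength * math.sin(math.radians(azimuth_deg))
    pan = min(max(pan, 0.0), 1.0)
    # Constant-power panning: left^2 + right^2 == amplitude^2.
    left = amplitude * math.cos(pan * math.pi / 2)
    right = amplitude * math.sin(pan * math.pi / 2)
    return left, right
```

Under this sketch, a target straight ahead yields equal left/right gains, a target to the right yields a larger right-ear gain, and doubling its distance halves both gains.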