Vanessa Harrar received the Award of Excellence – Poster Presentation category for her presentation entitled "Audio-visual multiple object tracking" at the 23rd Vision Health Research Network Annual Meeting, November 7, 2017
Audio-visual multiple object tracking
Vanessa Harrar1,2, Eugenie Roudaia1,2 and Jocelyn Faubert1,2
1 Visual Psychophysics and Perception Laboratory, School of Optometry, University of Montreal, Montreal, QC, Canada
2 NSERC-Essilor Industrial Research Chair, Montreal, QC, Canada
The ability to track objects as they move is critical for successful interaction with objects in the world. The multiple object tracking (MOT) paradigm has demonstrated that, within limits, our visual attention capacity allows us to track multiple moving objects among distractors. Very little is known, however, about dynamic auditory attention and the role of multisensory binding in attentional tracking. Here, we examined whether dynamic sounds congruent with visual targets could facilitate tracking in a 3D-MOT task. Participants tracked one or multiple target spheres among identical distractor spheres during 8 seconds of movement in a virtual cube. In the visual condition, targets were identified with a brief colour change but were otherwise indistinguishable from the distractors during the movement. In the audio-visual condition, each target sphere was accompanied by a sound that moved congruently with the change in the target's position: sound amplitude varied with distance from the observer, and inter-aural amplitude difference varied with azimuth. Preliminary results with a single target showed better performance in the audio-visual condition, which suggests that congruent sounds can facilitate attentional visual tracking. With multiple targets, however, the sounds did not facilitate tracking. This suggests that audio-visual binding may not be possible when attention is divided among multiple targets.
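The abstract does not specify the exact mapping functions used to spatialize the sounds, but the two cues it names (amplitude falling off with distance, and an inter-aural amplitude difference driven by azimuth) can be sketched with a simple inverse-distance law and an equal-power panning law. The function below is an illustrative assumption, not the study's actual implementation; the name `spatialize_gain`, the reference distance, and the ±90° azimuth range are all hypothetical choices.

```python
import math

def spatialize_gain(distance, azimuth_deg, ref_distance=1.0):
    """Toy spatialization model (illustrative only, not the study's code):
    overall amplitude falls off inversely with distance from the observer,
    and an inter-aural level difference is produced by equal-power panning
    on the azimuth. Returns (left_gain, right_gain)."""
    # Inverse-distance attenuation, clamped so gain never exceeds 1.
    amp = ref_distance / max(distance, ref_distance)
    # Map azimuth (-90 deg = full left, +90 deg = full right) to a pan
    # position in [0, 1], then apply sine/cosine equal-power gains.
    azimuth_deg = max(-90.0, min(90.0, azimuth_deg))
    pan = (azimuth_deg + 90.0) / 180.0
    left = amp * math.cos(pan * math.pi / 2)
    right = amp * math.sin(pan * math.pi / 2)
    return left, right
```

Under this sketch, a target straight ahead yields equal left and right gains, a target to the right yields a louder right channel, and doubling the distance halves the overall amplitude, so both cues the abstract describes vary continuously with the target's simulated 3D position.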