Semantic Gaze Mapping in Eye Tracking with Glasses

Objects within a scene are not fixed

Eye tracking glasses have the advantage that they allow gaze data to be collected in real-world settings. The natural environment is captured by the built-in HD scene camera, and the resulting video shows a moving scene. With a screen-based eye tracker, the stimuli presented to all participants are often identical, so the locations of eye movements on Areas of Interest (AOIs) can easily be identified automatically: the AOIs always appear in the same position relative to the screen, and analysis is automatic. This is not the case for mobile eye tracking, where the position of AOIs changes as the participant moves.


This is why we need Semantic Gaze Mapping (SGM)

SGM is the tool for identifying where fixations fall in relation to the AOIs. Fixations in the video recording are manually mapped by the researcher onto AOIs in a static Reference Image. A frame from the video or an uploaded photo can be used as the Reference Image. Note that the Reference Image can be quite flexible: for example, you could use a floor plan to show navigation around a series of rooms, or an image of words describing the AOIs rather than pictures of them. The key requirement is that the Reference Image can be used to present the data visually as well as quantitatively.
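To make the idea concrete, here is a minimal sketch of how AOIs on a Reference Image could be represented and how a mapped point could be assigned to one. This is purely illustrative and not the API of any particular eye-tracking software: the AOI names, coordinates, and function names are all assumptions, with each AOI defined as a named polygon in the Reference Image's pixel coordinates.

```python
# Hypothetical sketch: AOIs on a static Reference Image, each a named polygon
# in image pixel coordinates. Names and coordinates are illustrative only.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this polygon edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def map_point_to_aoi(x, y, aois):
    """Return the name of the AOI containing the point, or None."""
    for name, polygon in aois.items():
        if point_in_polygon(x, y, polygon):
            return name
    return None

# Two rectangular AOIs on a 1280 x 960 Reference Image (illustrative values)
aois = {
    "shelf_left":  [(100, 200), (400, 200), (400, 700), (100, 700)],
    "shelf_right": [(600, 200), (900, 200), (900, 700), (600, 700)],
}

print(map_point_to_aoi(250, 450, aois))  # prints "shelf_left"
```

In manual SGM the researcher's click supplies the (x, y) position, so a lookup like this simply records which AOI that click landed in.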


The SGM process

The fixations in the video are positioned on the AOIs by clicking the matching location on the Reference Image. Locating all the fixations in this way allows accurate visualisations and quantitative data to be produced. Unwanted fixations (those that do not fall on AOIs) can be ignored, making the process quick and easy.
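Once every fixation has been mapped to an AOI (or marked as falling outside all of them), the quantitative output follows directly. The sketch below, with illustrative field names that are my own assumptions rather than any tool's export format, shows how per-AOI fixation counts and total dwell time could be tallied while skipping the unwanted fixations:

```python
# Hypothetical aggregation step after manual mapping. Each fixation is a
# (aoi_name_or_None, duration_ms) pair; None means it fell outside every AOI.
from collections import defaultdict

mapped_fixations = [
    ("price_label", 320),
    ("product_logo", 180),
    (None, 250),          # did not land on any AOI: ignored
    ("price_label", 410),
]

counts = defaultdict(int)    # fixations per AOI
dwell_ms = defaultdict(int)  # total fixation duration per AOI

for aoi, duration in mapped_fixations:
    if aoi is None:          # unwanted fixation, skip
        continue
    counts[aoi] += 1
    dwell_ms[aoi] += duration

print(dict(counts))    # {'price_label': 2, 'product_logo': 1}
print(dict(dwell_ms))  # {'price_label': 730, 'product_logo': 180}
```

Tables like these are what feed the visualisations (heat maps, AOI statistics) built on the Reference Image.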
