- 12th Jun 2013
- Integrated E-prime / The Observer project available
- 5th Dec 2012
- EthoVision 9.0 released
- 13th Nov 2012
- EEG-eyetracker integration with Emotiv Toolkit
Our new brochure is now available:
FaceReader is designed to automatically recognise and log the six basic expressions: happy, sad, angry, surprised, scared and disgusted. In addition, FaceReader can recognise a 'neutral' state, and it logs other information from the face such as age, gender, ethnicity and the presence of facial hair and glasses.
FaceReader 4.0 automatically logs specific facial changes, e.g. mouth open/closed or eyebrows raised. New features include:
- Increased accuracy using new 3D modelling algorithms
- Massive increase in facial tracking points from 55 to 491
- Quick analysis of multiple videos using the new batch option
- API that allows other programs to react instantly to emotions
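The API mentioned above lets other programs react as emotion classifications arrive. The sketch below is purely illustrative: the `EmotionEvent` structure, the dispatcher and its method names are assumptions for the sake of the example, not the actual FaceReader API.

```python
# Hypothetical sketch of reacting to emotion classifications pushed by an
# analysis engine. EmotionEvent and EmotionDispatcher are illustrative
# assumptions, not the actual FaceReader API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EmotionEvent:
    timestamp: float   # seconds into the recording
    expression: str    # one of the six basic expressions or 'neutral'
    intensity: float   # 0.0 (absent) .. 1.0 (maximal)

class EmotionDispatcher:
    """Forwards each incoming emotion event to registered listeners."""
    def __init__(self) -> None:
        self._listeners: List[Callable[[EmotionEvent], None]] = []

    def subscribe(self, listener: Callable[[EmotionEvent], None]) -> None:
        self._listeners.append(listener)

    def publish(self, event: EmotionEvent) -> None:
        for listener in self._listeners:
            listener(event)

# Example: trigger an action whenever 'happy' exceeds a threshold.
triggered = []
dispatcher = EmotionDispatcher()
dispatcher.subscribe(
    lambda e: triggered.append(e)
    if e.expression == "happy" and e.intensity > 0.5
    else None
)

dispatcher.publish(EmotionEvent(0.0, "neutral", 0.9))  # ignored
dispatcher.publish(EmotionEvent(0.5, "happy", 0.8))    # triggers the action
```

A real client would subscribe once at start-up and attach whatever instant response the study needs (a stimulus change, a marker in the log, and so on).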
How does it work?
Using still images, video clips or a live feed, FaceReader creates a map of the face and uses the movement of key points on this surface to identify facial expression changes.
Data is displayed in real time, represented as a continuous signal or bar chart. Once a subject has been tracked, the system will recognise them automatically in subsequent trials.
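The idea of using the movement of key points to detect expression changes can be sketched minimally: compare the positions of a few facial points between frames and flag a change when their displacement exceeds a threshold. The points, threshold and distance measure below are illustrative assumptions, not FaceReader's actual 3D model.

```python
# Minimal illustration of key-point tracking: compare corresponding facial
# points across two frames and flag an expression change when the average
# displacement exceeds a threshold. All values here are illustrative
# assumptions, not FaceReader's actual algorithm.
import math

def mean_displacement(prev, curr):
    """Average Euclidean distance between corresponding key points."""
    dists = [math.dist(p, c) for p, c in zip(prev, curr)]
    return sum(dists) / len(dists)

def expression_changed(prev, curr, threshold=2.0):
    """True when the key points have moved more than `threshold` on average."""
    return mean_displacement(prev, curr) > threshold

# Two frames of (x, y) key points, e.g. mouth corners and eyebrow tips.
frame1 = [(100, 200), (140, 200), (110, 150), (130, 150)]
frame2 = [(100, 205), (140, 205), (110, 146), (130, 146)]  # mouth opens, brows rise

print(expression_changed(frame1, frame2))
```

A production system would track hundreds of points (FaceReader 4.0 uses 491) and classify the pattern of movement rather than a single aggregate distance, but the principle is the same.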
Data can be exported directly to The Observer XT for comprehensive data integration, e.g. combining it with tasks, physiological measurements and eye tracking.
Features and Benefits
- Automatic recognition of key expressions: reduces the time spent collecting behavioural data
- Integration with other data sources, such as other behaviours, physiological measurements and eye tracking, via The Observer XT: part of a complete solution for your research needs