While not yet used in any commercially available editing software, the technology to derive emotion from image content already exists. I’ll point to three currently active projects: Emotient, Affectiva and Intraface.
Emotient – purchased by Apple in January 2016 – uses artificial intelligence to read emotion through the analysis of facial expressions. Emotient has three primary APIs:
Attention – Is your advertising or product getting noticed?
Engagement – Are people responding emotionally?
Sentiment – Are they showing positive, negative or no emotion?
What Apple plans to do with this technology is unknown, but it would generate some excellent metadata for, say, Keyword Ranges.
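What that derived metadata might look like: below is a minimal sketch in Python, assuming a hypothetical FrameScores record standing in for whatever per-frame analysis such software returns (none of this is Emotient’s or Apple’s actual API). It collapses per-frame positive-sentiment scores into time ranges that an NLE could attach to a clip as keyword ranges.

```python
from dataclasses import dataclass

# Hypothetical per-frame scores an Emotient-style analyzer might return.
# The field names are illustrative; Apple has not published such an API.
@dataclass
class FrameScores:
    time: float        # seconds from the start of the clip
    attention: float   # 0.0 - 1.0: is a face oriented toward the camera/screen?
    engagement: float  # 0.0 - 1.0: strength of any emotional response
    sentiment: float   # -1.0 (negative) to +1.0 (positive)

def sentiment_keyword_ranges(frames, threshold=0.5, fps=30.0):
    """Collapse per-frame positive-sentiment scores into (start, duration)
    ranges that an NLE could attach to a clip as keyword ranges."""
    ranges = []
    start = None
    for f in frames:
        if f.sentiment >= threshold and start is None:
            start = f.time                          # open a new range
        elif f.sentiment < threshold and start is not None:
            ranges.append((start, f.time - start))  # close the range
            start = None
    if start is not None and frames:
        # pad the final open range by one frame duration
        ranges.append((start, frames[-1].time - start + 1.0 / fps))
    return ranges

# Example: a few seconds of mostly positive reaction starting at 12.0s
# would come back as a single (12.0, duration) range, ready to be tagged
# with a "Positive Sentiment" keyword.
```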
Affectiva, a Waltham, Mass., company, has facial recognition software that can accurately tell the difference between a happy smile, an embarrassed smile and a smirk! The company is now marketing its facial-expression analysis software for market research – gauging customer reaction – to improve designs and marketing campaigns.
The software “makes it possible to measure audience response with a scene-by-scene granularity that the current survey-and-questionnaire approach cannot,” Mr. Hamilton said. A director, he added, could find out, for example, that although audience members liked a movie overall, they did not like two or three scenes. Or he could learn that a particular character did not inspire the intended emotional response.
Intraface is a research project within the University of Pittsburgh that provides face tracking, pose and gaze estimation, and expression analysis. It’s not commercially available, other than as a somewhat pointless app, also called Intraface. (Available free in the Apple App Store and Google Play store.)
These projects show the direction we’re heading. If software can detect emotional responses to movies, it can also detect emotional performances and – for documentary/reality/news – detect the emotion in a subject’s face to help drive editing via derived metadata.
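To make that last point concrete, here is a small Python sketch with assumed inputs: per-frame dominant-emotion labels from some face analyzer, not from any of the products above. It groups consecutive frames that share an emotion into spans an editor could review as candidate selects.

```python
from itertools import groupby

# Assumed input: one (timecode_seconds, dominant_emotion_label) pair per frame,
# e.g. (41.2, "joy"). The labels and the analyzer producing them are
# illustrative assumptions, not any specific product's output.
def emotional_spans(frames, min_duration=1.0):
    """Group consecutive frames sharing a dominant emotion into spans
    an editor (or an automated assembly) could treat as candidate selects."""
    spans = []
    for label, group in groupby(frames, key=lambda f: f[1]):
        group = list(group)
        start, end = group[0][0], group[-1][0]
        if label != "neutral" and (end - start) >= min_duration:
            spans.append({"emotion": label, "start": start, "end": end})
    return spans

# A documentary interview pass might yield something like:
# [{"emotion": "joy", "start": 41.2, "end": 44.8},
#  {"emotion": "surprise", "start": 97.0, "end": 98.6}]
```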